If you’re an iOS developer, you have probably seen the number 44 all over the place: the height of buttons, navigation bars, table view cells, and so on. (Strictly speaking it is 44 points, not pixels — iOS layout is measured in points, which map to one or more physical pixels depending on the screen scale.) But why are so many sizes 44 points?
Bridging the image in a UIImage into other APIs such as OpenCV can be painful. However, there is one format that every image-processing library understands: “raw”, where each pixel is represented as unsigned bytes in an array. Unless you are dealing with video formats, which can use YUV,
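To make the “raw” layout concrete, here is a minimal sketch of how such a buffer is indexed. It assumes RGBA8888 — four bytes per pixel, no row padding — and the `RawImage` type is hypothetical, not part of any framework; in a real app you would typically obtain the byte array by drawing the UIImage’s CGImage into a CGBitmapContext.

```swift
import Foundation

// Hypothetical wrapper around a raw RGBA8888 buffer.
// Assumes 4 bytes per pixel and no row padding (bytesPerRow == width * 4).
struct RawImage {
    let pixels: [UInt8]   // R, G, B, A, R, G, B, A, ...
    let width: Int
    let height: Int

    // Returns the (r, g, b, a) channel values of the pixel at (x, y).
    func pixel(x: Int, y: Int) -> (r: UInt8, g: UInt8, b: UInt8, a: UInt8) {
        let i = (y * width + x) * 4
        return (pixels[i], pixels[i + 1], pixels[i + 2], pixels[i + 3])
    }
}

// A 2x1 image: one opaque red pixel, one opaque green pixel.
let image = RawImage(pixels: [255, 0, 0, 255,  0, 255, 0, 255],
                     width: 2, height: 1)
let p = image.pixel(x: 1, y: 0)
print(p.g)  // 255
```

Because the layout is just a flat array of bytes, the same buffer can be handed to OpenCV, vImage, or any C library that accepts a pointer, width, and stride.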
I recently had a project where I had to do some image processing based on where a user touches an image (in a UIImageView). Getting the touch coordinates was easy enough, but the challenge was turning those touch coordinates into pixel coordinates. Depending on the way the UIImageView is set to
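As a sketch of the conversion, here is the math for one common case: a view whose contentMode is `.scaleAspectFit`, where the image is scaled to fit and centered with letterbox bars. The function name and signature are my own illustration, not from the original post, and other content modes (`.scaleAspectFill`, `.center`, …) need different math.

```swift
import Foundation

// Hypothetical helper: convert a touch point inside an aspect-fit image view
// into pixel coordinates of the underlying image.
// Assumes contentMode == .scaleAspectFit (image scaled to fit, centered).
func pixelPoint(forTouch touch: CGPoint,
                viewSize: CGSize,
                imageSize: CGSize) -> CGPoint? {
    // Aspect-fit uses the smaller of the two scales so the whole image fits.
    let scale = min(viewSize.width / imageSize.width,
                    viewSize.height / imageSize.height)
    let displayed = CGSize(width: imageSize.width * scale,
                           height: imageSize.height * scale)
    // The image is centered, leaving empty bars on two sides.
    let origin = CGPoint(x: (viewSize.width - displayed.width) / 2,
                         y: (viewSize.height - displayed.height) / 2)
    // Undo the centering offset, then the scale, to reach image pixel space.
    let x = (touch.x - origin.x) / scale
    let y = (touch.y - origin.y) / scale
    // Touches that land in the letterbox bars are outside the image.
    guard x >= 0, y >= 0, x < imageSize.width, y < imageSize.height else {
        return nil
    }
    return CGPoint(x: x, y: y)
}

// A 100x100 image in a 200x100 view: displayed at 100x100, centered at x = 50.
let p = pixelPoint(forTouch: CGPoint(x: 100, y: 50),
                   viewSize: CGSize(width: 200, height: 100),
                   imageSize: CGSize(width: 100, height: 100))
print(p as Any)  // Optional((50.0, 50.0))
```

A touch in the empty bar to the left of the image (say, at x = 10 in the example above) returns `nil`, which is usually what you want before running any per-pixel processing.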