Unfortunately, Apple does not give developers access to the ambient light sensor on the top of its iOS devices (used to measure ambient brightness and adjust the screen's brightness accordingly). When I say access, I mean direct access to its output rather than the proximity state that can be inferred from it. This leaves us with
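For context, what Apple does expose publicly is the proximity state on UIDevice. A minimal sketch of reading it (the view controller name is hypothetical):

```objc
#import <UIKit/UIKit.h>

@interface ProximityViewController : UIViewController
@end

@implementation ProximityViewController

- (void)viewDidLoad {
    [super viewDidLoad];

    // Enable proximity monitoring and listen for state changes. This only
    // yields a near/far proximity state, not a raw light-sensor reading.
    [UIDevice currentDevice].proximityMonitoringEnabled = YES;

    [[NSNotificationCenter defaultCenter]
        addObserver:self
           selector:@selector(proximityChanged:)
               name:UIDeviceProximityStateDidChangeNotification
             object:nil];
}

- (void)proximityChanged:(NSNotification *)note {
    BOOL near = [UIDevice currentDevice].proximityState;
    NSLog(@"Proximity state: %@", near ? @"near" : @"far");
}

@end
```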
Objective-C categories are great for extending classes; however, if you want to override methods you are going to run into problems, as you can no longer call the implementation the original class owns, usually breaking a lot of functionality the higher up the class food chain you travel (try this on NSObject's init and return nil). If you call the same method on self you end up with an infinite loop, and calling super skips the original class completely.
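One common workaround, not covered in this excerpt, is method swizzling via the Objective-C runtime, which keeps the original implementation reachable under a different selector. A minimal sketch, using UIViewController's viewDidLoad purely as an illustrative target:

```objc
#import <objc/runtime.h>
#import <UIKit/UIKit.h>

@interface UIViewController (LoggingSwizzle)
@end

@implementation UIViewController (LoggingSwizzle)

+ (void)load {
    // Exchange -viewDidLoad and -swizzled_viewDidLoad so the original
    // implementation stays reachable under the swizzled selector.
    Method original = class_getInstanceMethod(self, @selector(viewDidLoad));
    Method swizzled = class_getInstanceMethod(self, @selector(swizzled_viewDidLoad));
    method_exchangeImplementations(original, swizzled);
}

- (void)swizzled_viewDidLoad {
    // After the exchange this call actually runs the original -viewDidLoad,
    // so there is no infinite loop and the original class is not skipped.
    [self swizzled_viewDidLoad];
    NSLog(@"viewDidLoad: %@", NSStringFromClass([self class]));
}

@end
```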
Something you may have seen with apps from the App Store, or even your own apps, is that they take a very long time to open. The common reason is that a lot of initialization code is run before it needs to be. For example, some apps might initialize 10 UIViewControllers when
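A minimal sketch of the usual fix, lazy initialization: a hypothetical accessor defers the alloc/init until the controller is actually needed, rather than paying the cost at launch.

```objc
#import <UIKit/UIKit.h>

@interface MenuViewController : UIViewController
@end

@implementation MenuViewController {
    UIViewController *_settingsController; // built lazily, not at launch
}

// Hypothetical lazy getter: the controller is only created the first
// time something actually asks for it, keeping app launch fast.
- (UIViewController *)settingsController {
    if (_settingsController == nil) {
        _settingsController = [[UIViewController alloc] init];
    }
    return _settingsController;
}

@end
```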
I recently had a project where I had to do some image processing based on where a user touches an image (in a UIImageView). Getting the touch coordinates was easy enough, but the challenge was turning those touch coordinates into pixel coordinates. Depending on the way the UIImageView is set to
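As a rough illustration of that conversion, here is a sketch for one case; the helper name is hypothetical and it assumes the image view's content mode is UIViewContentModeScaleAspectFit.

```objc
#import <UIKit/UIKit.h>

// Hypothetical helper: converts a touch point in the image view's
// coordinate space into a coordinate in the image, assuming aspect fit.
static CGPoint PixelPointForTouch(CGPoint touch, UIImageView *imageView) {
    CGSize imageSize = imageView.image.size;
    CGSize viewSize  = imageView.bounds.size;

    // Aspect-fit scale and the letterboxing offsets it produces.
    CGFloat scale = MIN(viewSize.width / imageSize.width,
                        viewSize.height / imageSize.height);
    CGFloat offsetX = (viewSize.width  - imageSize.width  * scale) / 2.0;
    CGFloat offsetY = (viewSize.height - imageSize.height * scale) / 2.0;

    // Undo the offset and scale to land back in image coordinates.
    return CGPointMake((touch.x - offsetX) / scale,
                       (touch.y - offsetY) / scale);
}
```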
Previously I wrote about using CGPDF to display PDF document pages like you would display images. Now we are going to get into something that goes beyond what most PDF apps on iOS do: OCR. There are a couple of different options when it comes to OCR libraries, but my research indicates that Tesseract
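Before any OCR can happen, the PDF page has to be rasterized. A minimal Core Graphics sketch (the function name is hypothetical, and it renders only the first page at its media box size) of producing a UIImage that could then be handed to an OCR engine such as Tesseract:

```objc
#import <UIKit/UIKit.h>

// Render the first page of a PDF into a UIImage for further processing.
static UIImage *ImageForFirstPDFPage(NSURL *pdfURL) {
    CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((__bridge CFURLRef)pdfURL);
    CGPDFPageRef page = CGPDFDocumentGetPage(document, 1); // pages are 1-indexed
    CGRect pageRect = CGPDFPageGetBoxRect(page, kCGPDFMediaBox);

    UIGraphicsBeginImageContextWithOptions(pageRect.size, YES, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Fill with white and flip the coordinate system so the page renders upright.
    CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextFillRect(context, pageRect);
    CGContextTranslateCTM(context, 0.0, pageRect.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawPDFPage(context, page);

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGPDFDocumentRelease(document);
    return image;
}
```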
I was working on an app last week that had a strange problem: without warning it would lose some of its subviews during use. With little documentation explaining why this was happening, it was a tricky problem to solve, but I managed to figure it out eventually. The first clue was