⌘+r

Recently at Fyuse, we've started breaking out our main functionality into a public SDK that clients can use to view and create fyuses in their own apps.  

Ok so first things first, let's pull our camera and fyuse viewing code out into their own frameworks.  Should be easy, right?  Well no, especially not when you've got a 4-year-old app that relies on internal computer vision libraries big enough to take 30+ minutes to build on your perfectly adequate 13" MacBook Pro.  And definitely not when the app was built in startup land where, let's face it, "working" usually edges out "well-factored".

This means we've really had to dig into our internal dependencies and think about how we wanted things to be structured.  For me, knowing exactly how all the build systems an app relies on actually work has been a bit of a weak spot for a while, so this has been a good opportunity to really sit down and think about what happens when I hit the "run" button.

Compile Step

The first thing that happens when you hit "run" is that Xcode starts compiling each source file included in your app's target, as well as in any dependency targets.  The files that will get compiled are listed in Build Phases > Compile Sources. If a file has syntax errors, compilation fails and you'll get a build error.

For each source file, the compile step will create a compiled binary object file.

Linker Step

Next, all of the .o (object) files that were generated during the compile step are linked together into a single executable, which ultimately gets packaged up into the final archive or .ipa.

Linker errors are, in general, less common and can end up being a bit more annoying to track down. Basically, your app thinks it should have a definition for some symbol and when it gets to this step the actual definition is nowhere to be found.

I've seen this when symbols were renamed with #defines or when a static lib was compiled with dynamic dependencies that weren't included in the target app.  Originally, static libs in iOS only contained their own code and always expected client apps to pull in dependencies for them, though this might have changed with Xcode 9.

Ways to Include Dependencies

Subprojects and Workspaces

A project is something that Xcode can actually open and contains a list of targets that can be built.

A target is something that has a set of headers and source files, as well as a specific set of configurations set in Build Settings.  Most commonly, targets will output either a compiled framework, a static library, or an executable app.

Traditionally, the main way of including dependencies from source was to add a subproject to your app.  If you have a lot of projects that all depend on each other, you can use a workspace to avoid adding a duplicate copy of the subproject to every other project that depends on it.

The pros of adding a dependency as a subproject are that you know exactly what code you're putting into your app and you can still set breakpoints in the code you've included.  If your dependency is a 3rd-party library, then a big downside can be that any updates you want to do need to be done by hand, though this does force you to be very intentional about your updates.

Precompiled Libraries and Frameworks

Alternatively, if you don't care about having source files for your dependencies, you can compile your dependencies as either a static library with accompanying headers or a dynamic framework, which contains the headers in the framework bundle.

Both of these get added to your project's Frameworks folder and need to be listed in your app target's Linked Frameworks and Libraries section.  If you chose a static lib, you'll also need to add the path to the lib's headers to Header Search Paths in Build Settings. If you went with a framework, you add the path to the framework to the Framework Search Paths list instead.
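For reference, those two settings correspond to the HEADER_SEARCH_PATHS and FRAMEWORK_SEARCH_PATHS build settings. If you keep your configuration in .xcconfig files, the entries would look roughly like this (the Vendor paths are just placeholders for wherever you actually keep your precompiled dependencies):

    // Hypothetical .xcconfig snippet; adjust the paths to wherever your libs really live.
    HEADER_SEARCH_PATHS = $(inherited) $(SRCROOT)/Vendor/MyStaticLib/include
    FRAMEWORK_SEARCH_PATHS = $(inherited) $(SRCROOT)/Vendor/Frameworks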

In general, I don't really see a reason to use dynamic frameworks for iOS apps. On a Mac they make sense since the system can share dynamic libraries and avoid loading them more than once, but for iOS the only difference I see is that they make your app take longer to launch.  It's totally possible I'm missing some big advantage, but as far as I can tell there is none.

The big advantage here is that this pre-compilation can mean saving tons of time if your dependency is relatively large and rarely changes.

Automating Your Dependencies

If you don't care to manually manage subprojects or precompiled libs, you do have some other options. 

1) CocoaPods: A super popular option is CocoaPods. To use it, all you need to do is define a Podfile for your app that lists the frameworks you want to use. Then, you just run "pod install" and CocoaPods will magically pull down the framework code and generate a special "Pods" project, as well as a containing workspace that lets your app use the Pods project as a dependency. Each framework you've specified will show up as a target inside the Pods project, with build settings configured based on that framework's podspec file (a minimal Podfile is sketched just after this list).

2) Carthage: A popular competitor to CocoaPods has been Carthage. The difference here is that you define a Cartfile that specifies a list of frameworks, and then Carthage will pull down and build .framework binaries for you. You then need to include these binaries as dependencies in your app manually.
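To make the two formats concrete, here's roughly what each file looks like. The dependency and version below are just placeholders, not recommendations.

A minimal Podfile for CocoaPods:

    # Podfile - the target name and pod are placeholders
    platform :ios, '9.0'
    use_frameworks!

    target 'MyApp' do
      pod 'Alamofire', '~> 4.0'
    end

And the Carthage equivalent, a Cartfile that just says where each framework lives and which versions are acceptable:

    # Cartfile - placeholder dependency
    github "Alamofire/Alamofire" ~> 4.0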

People seem to like that things are a little less magical with Carthage, but you do give up access to the source code (and thus breakpoints and easy debugging) while gaining more fine-grained control of your dependencies. Which one you choose really depends on where your priorities lie, as well as how much control you feel you need.

Conclusion

I'll be honest, this isn't the most interesting topic in the world, but it's definitely something that you'll need to deal with at some point, so it's nice to know what your options are and what the pros and cons are of these options.

Friday 10.27.17
Posted by Luke Parham

iOS: Rendering the UI

Ever wondered how exactly the system renders your app's UI?  I don't care how you answered that.  Keep reading.

The Main Run Loop

The main run loop is responsible for making a lot of things happen during each frame of your app's existence.  Since this is an environment where the user can interact with the system, there has to be a mechanism to collect, and then react to, events.  The run loop's goal is to do the app's work, pass a CATransaction to the backend, and move to the "waiting" state as soon as possible.

If the run loop finishes its work before it's time to render the next frame, it will sleep until more work comes in; if something arrives before the bottom of the current run loop cycle, it will wake up and keep working.
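You can actually watch these transitions. Here's a small Swift sketch (the function name and log messages are mine, not from any Apple sample; it assumes a recent Swift toolchain) that adds an observer to the main run loop and logs when it's about to go to sleep versus when it wakes back up:

    import UIKit

    // Sketch: log when the main run loop is about to sleep and when it wakes up.
    // Call this once, e.g. from application(_:didFinishLaunchingWithOptions:).
    func installRunLoopLogger() {
        let activities: CFOptionFlags =
            CFRunLoopActivity.beforeWaiting.rawValue | CFRunLoopActivity.afterWaiting.rawValue
        let observer = CFRunLoopObserverCreateWithHandler(kCFAllocatorDefault, activities, true, 0) { _, activity in
            if activity == .beforeWaiting {
                print("run loop: done for this pass, going to sleep")
            } else {
                print("run loop: woke up, more work arrived")
            }
        }
        CFRunLoopAddObserver(CFRunLoopGetMain(), observer, .commonModes)
    }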

Running at 60 FPS

In the time each frame gets (~16 ms) there is app work and Render Server work to be done.  When all is said and done, your app can really only use 5-10 ms of that time.  If it turns out that the work you've scheduled can't be completed in that window, your app will miss the next display refresh, which means that frame has been dropped.
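If you want a rough way to see when you're blowing that budget, a CADisplayLink that measures the gap between refreshes will do it; anything well past one frame's worth of time means a dropped frame. This is just a sketch (the class name and threshold are my own, and it assumes a 60 Hz display and a recent Swift):

    import UIKit

    // Sketch: flag frames where the gap between display refreshes was too long.
    final class FrameDropWatcher {
        private var link: CADisplayLink?
        private var lastTimestamp: CFTimeInterval = 0

        func start() {
            link = CADisplayLink(target: self, selector: #selector(tick(_:)))
            link?.add(to: .main, forMode: .common)
        }

        func stop() {
            link?.invalidate()   // also releases the display link's strong reference to self
            link = nil
        }

        @objc private func tick(_ link: CADisplayLink) {
            defer { lastTimestamp = link.timestamp }
            guard lastTimestamp > 0 else { return }

            let elapsed = link.timestamp - lastTimestamp
            if elapsed > (1.0 / 60.0) * 1.5 {   // well past a single 60 FPS frame
                print(String(format: "Dropped frame: %.1f ms since last refresh", elapsed * 1000))
            }
        }
    }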

The Work to be Done

During each frame, the run loop will perform any blocks that were dispatched to the main queue and process any touches that occurred.  Then, at the bottom of the run loop, if necessary, a CATransaction will be created and sent to the Render Server via IPC.

A transaction is created any time any change should happen on-screen.  Say you set the bounds of a view; at the bottom of this iteration of the run loop, a transaction will be created and punted off to the render server for processing.
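If you want to control when a batch of changes goes over, you can also open a transaction explicitly. In this sketch, someView is just a stand-in for whatever view you're actually changing:

    import UIKit

    // Sketch: group a few layer changes into one explicit transaction and
    // skip the implicit animations they would normally get.
    let someView = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))

    CATransaction.begin()
    CATransaction.setDisableActions(true)        // commit the changes without animating
    someView.layer.bounds = CGRect(x: 0, y: 0, width: 200, height: 100)
    someView.layer.cornerRadius = 8
    CATransaction.commit()                       // flushed off to the render server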

When a transaction is sent to the backend, the entire layer tree is analyzed and re-rendered.  This is why subtree rasterization is so useful since you’re rendering one layer instead of however many sublayers you would have had.
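The opt-in for that lives on CALayer. If a subtree is expensive to draw but mostly static, rasterizing it caches the composited result as a single bitmap (complexView below is a stand-in for the view that owns that subtree):

    import UIKit

    // Sketch: cache a complex, mostly-static subtree as one bitmap instead of
    // re-compositing all of its sublayers every frame.
    let complexView = UIView()   // stand-in for a view with many sublayers
    complexView.layer.shouldRasterize = true
    complexView.layer.rasterizationScale = UIScreen.main.scale   // match the screen so the bitmap isn't blurry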

Finally, if necessary, the GPU work is done.  This involves things such as cornerRadius, borderWidth, layer masks, and shadows, all of which can flood the GPU with work.  Layer blending is also GPU bound and matters a lot on older devices.  This is all known as "offscreen rendering" and also affects performance.
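Shadows are a good example of this: without an explicit path, Core Animation has to render the layer offscreen just to figure out the shadow's shape, so handing it the path up front is a common mitigation (cardView here is a placeholder):

    import UIKit

    // Sketch: an explicit shadowPath lets the render server skip the extra
    // offscreen pass it would otherwise need to compute the shadow's outline.
    let cardView = UIView(frame: CGRect(x: 0, y: 0, width: 300, height: 180))
    cardView.layer.shadowColor = UIColor.black.cgColor
    cardView.layer.shadowOpacity = 0.3
    cardView.layer.shadowOffset = CGSize(width: 0, height: 2)
    cardView.layer.shadowPath = UIBezierPath(roundedRect: cardView.bounds, cornerRadius: 8).cgPath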

With all this in mind, it's interesting to know that animations use largely the same mechanism to get things moving.

The Way Client Side Animations Work

When an animation is created in Pop, UIKit Dynamics, or most commonly a UIScrollView, a CADisplayLink is set up so that the layer can be changed at a constant interval between the beginning state and ending state.  The new bounds are calculated using whatever physics equation is being used and a CATransaction will be created and sent to the render server as if you had set the bounds manually.
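In spirit, that pattern looks something like the sketch below. This is a toy version written for illustration, not Pop's or UIScrollView's actual code; it just moves a layer toward a target point at a fixed speed, committing a new value to the render server on every display refresh (it assumes iOS 10+ for targetTimestamp):

    import UIKit

    // Sketch of a client-side animation: a CADisplayLink fires each frame, the
    // app computes the next intermediate value, and every assignment results in
    // a fresh CATransaction being sent to the render server.
    final class LinearMover {
        private var link: CADisplayLink?
        private let layer: CALayer
        private let target: CGPoint
        private let pointsPerSecond: CGFloat = 300

        init(layer: CALayer, target: CGPoint) {
            self.layer = layer
            self.target = target
        }

        func start() {
            link = CADisplayLink(target: self, selector: #selector(step(_:)))
            link?.add(to: .main, forMode: .common)
        }

        @objc private func step(_ link: CADisplayLink) {
            let dt = CGFloat(link.targetTimestamp - link.timestamp)   // time until the next frame
            let current = layer.position
            let dx = target.x - current.x
            let dy = target.y - current.y
            let distance = (dx * dx + dy * dy).squareRoot()
            let stepLength = pointsPerSecond * dt

            CATransaction.begin()
            CATransaction.setDisableActions(true)   // we're driving the frames ourselves
            if distance <= stepLength {
                layer.position = target
                link.invalidate()                   // done; stop getting callbacks
            } else {
                layer.position = CGPoint(x: current.x + dx / distance * stepLength,
                                         y: current.y + dy / distance * stepLength)
            }
            CATransaction.commit()
        }
    }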

The Way “Normal” Animations Work (Back-End Animations)

When a UIKit animation is created, it is itself a CATransaction that is sent to the render server.  Then all subsequent steps in the animation, as well as the physics calculations, occur on the backend regardless of what's going on in your app's threads.  This is why it's hard to react to user input and change the course of one of these animations: there is IPC involved, so naturally it can't exactly be instant.
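Contrast that with the standard UIKit call, where the whole animation description is committed once and the per-frame interpolation happens over in the render server (box is just a placeholder view):

    import UIKit

    // Sketch: the from/to state and timing get packaged into a single
    // transaction; the intermediate frames are produced out-of-process.
    let box = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
    UIView.animate(withDuration: 0.3) {
        box.center.y += 200
    }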

On the Render Server, a CADisplayLink is maintained so that it can keep track of your app’s refresh rate itself and calculate the new bounds at each refresh from there.

Fun Fact: At the point the main run loop goes to the "before waiting" state, if you've called -setNeedsLayout, it will call -layoutSubviews for you.  In the context of a collection or table view, -layoutSubviews is then what calls -cellForRowAtIndexPath: as new cells should be coming on screen.
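In code form (someView is a placeholder), the setter only marks the view as dirty; the actual layout pass comes later in the run loop unless you force it:

    import UIKit

    // Sketch: setNeedsLayout is cheap and just sets a flag; layoutSubviews runs
    // later, at the bottom of the current run loop pass.
    let someView = UIView()
    someView.setNeedsLayout()     // flag the view as needing layout
    someView.layoutIfNeeded()     // optional: force the pending layout pass to run right now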

You can't tell me that wasn't worth it!

 

Thursday 05.26.16
Posted by Luke Parham