By Tyler Keenan
Apple’s latest mobile update, iOS 11, goes way beyond the normal slate of improvements and optimizations. Specifically, it allows iOS developers to take full advantage of two of the biggest trends in development today: augmented reality and machine learning. In this post, we’ll take a look at both, as well as do a quick roundup of some of the other features mobile developers should take note of.
Build an Immersive Experience with ARKit
Augmented reality has the potential to dramatically re-shape everything from consumer entertainment to industrial manufacturing. Now, with the ARKit platform included in iOS 11, Apple has brought the current bleeding edge of AR capabilities to consumer app developers. Where previous AR experiences largely consisted of 2D overlays, ARKit gives apps a sophisticated understanding of the scenes they’re in. It can detect horizontal and vertical planes, allowing apps to accurately place and rotate objects relative to the user’s position, and detect ambient light levels in order to apply lighting or shadow effects.
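To make that concrete, here's a minimal sketch of how an app opts in to those capabilities (assuming a standard `ARSCNView`-based setup; the class and outlet names are illustrative). Plane detection and light estimation are both switched on through the session configuration:

```swift
import ARKit

// Illustrative view controller: assumes an ARSCNView named `sceneView`
// is set up in the storyboard and this class is its delegate.
class ARViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        // Detect flat surfaces. (Horizontal detection shipped with iOS 11.0;
        // vertical plane detection arrived with ARKit 1.5 in iOS 11.3.)
        configuration.planeDetection = [.horizontal]
        // Estimate ambient light so virtual objects can be lit to match the room.
        configuration.isLightEstimationEnabled = true
        sceneView.session.run(configuration)
    }

    // ARKit calls this when it detects and anchors a new plane in the scene.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        print("Detected plane at \(planeAnchor.center), extent \(planeAnchor.extent)")
    }
}
```

Once a plane is detected, the delegate callback gives the app a real-world surface it can place virtual content on.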
To illustrate the kind of advancements we’re talking about, check out the more recent updates to Pokémon Go. In the original game, the various critters players chased remained in a fixed position no matter how the user moved their phone around. Since integrating the game with ARKit, however, the game is able to understand the player’s position relative to their Pokémon. This feature, called positional tracking, allows the game to automatically adjust a Pokémon’s size and orientation depending on how a player approaches it.
From a gameplay perspective, this allows players to sneak up on Pokémon from behind. More than that, however, positional tracking represents a major advance in mobile AR capabilities that other developers can apply to any number of non-Pokémon-related uses.
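The building block behind that kind of positional tracking is the world anchor: the app pins a virtual object to a real-world coordinate, and ARKit keeps it fixed in space as the user moves. A rough sketch (assuming the same `sceneView` setup as above, with a tap gesture recognizer attached; the hit-test API shown here is the iOS 11-era `ARHitTestResult` interface):

```swift
import ARKit

extension ARViewController {
    // Hypothetical handler: place an anchor where the user taps on a detected plane.
    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)
        // Hit-test the tap against planes ARKit has already found.
        guard let result = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first else {
            return
        }
        // Anchor content at that real-world position; ARKit then keeps it
        // fixed in space, scaling and rotating it as the user walks around.
        let anchor = ARAnchor(transform: result.worldTransform)
        sceneView.session.add(anchor: anchor)
    }
}
```

From there, the same delegate callback that handled plane anchors can attach geometry (a character, a label, a product model) to the new anchor's node.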
Bring Machine Learning to Your App with CoreML
Up to now, app developers who’ve wanted to take advantage of machine learning algorithms have needed to perform the calculations on the server-side, which can result in performance delays and obviously requires a stable connection. With CoreML, however, iOS 11 has made it possible for developers to implement any number of machine learning frameworks inside the apps themselves, taking advantage of the iPhone’s hardware.
We’ve written quite a bit about various machine learning frameworks, and it should be exciting for app developers to see many of those frameworks supported by CoreML, including artificial neural networks up to 30 layers deep. Beyond that, CoreML adds specific support for two of the most important subfields of machine learning: computer vision and natural language processing.
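In practice, using a converted model is only a few lines of code. A hedged sketch, assuming a Core ML model file named `FlowerClassifier.mlmodel` (a hypothetical image classifier) has been dropped into the Xcode project, which auto-generates the `FlowerClassifier` Swift class:

```swift
import CoreML
import Vision

// Classify an image entirely on-device using a bundled Core ML model.
// `FlowerClassifier` is a placeholder for whatever model your app ships.
func classify(_ image: CGImage) {
    guard let model = try? VNCoreMLModel(for: FlowerClassifier().model) else { return }
    let request = VNCoreMLRequest(model: model) { request, _ in
        // Vision returns classification results sorted by confidence.
        guard let best = (request.results as? [VNClassificationObservation])?.first else { return }
        print("\(best.identifier): confidence \(best.confidence)")
    }
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```

Because inference runs locally, the app gets a prediction without a network round trip, which is exactly the latency and connectivity win described above.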
From a developer’s standpoint, what matters is that developers can now create apps that track and recognize faces, text, objects, and barcodes, while also automatically detecting different languages and parsing parts of speech.
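The language side of this is exposed through `NSLinguisticTagger`, which was significantly expanded in iOS 11. A short sketch showing on-device language identification and part-of-speech tagging:

```swift
import Foundation

// NSLinguisticTagger (expanded in iOS 11) runs entirely on-device.
let text = "Machine learning now runs on the phone itself"
let tagger = NSLinguisticTagger(tagSchemes: [.language, .lexicalClass], options: 0)
tagger.string = text

// Identify the dominant language of the string (e.g. "en").
print("Language: \(tagger.dominantLanguage ?? "unknown")")

// Tag each word with its part of speech (noun, verb, adverb, ...).
let range = NSRange(location: 0, length: text.utf16.count)
let options: NSLinguisticTagger.Options = [.omitWhitespace, .omitPunctuation]
tagger.enumerateTags(in: range, unit: .word, scheme: .lexicalClass, options: options) { tag, tokenRange, _ in
    let word = (text as NSString).substring(with: tokenRange)
    print("\(word): \(tag?.rawValue ?? "?")")
}
```

The same tagger can also perform lemmatization and named-entity recognition, all without sending text off the device.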
Note: Apps that take advantage of AR and machine learning require a lot of processing power. The latest iPhone processor (the A11) features what Apple calls a “neural engine,” essentially a dedicated coprocessor optimized for certain kinds of machine learning methods and for the heavy calculations they require.
Read more here: B2CMarketingInsider