Creative Practice Post #3: Analysis - ARKit vs ARCore vs Unity
Updated: Dec 10, 2019
ARKit and ARCore are the two biggest AR SDKs, so let's take a look at ARKit first:
•Outstanding Tracking – unrivaled tracking that fuses camera images with data collected from the motion sensors to locate the position of the device in the real world.
•Plane Detection – It identifies flat surfaces in its surroundings, such as floors, tables, and walls, so virtual content can be placed on them.
•Rendering – It integrates without any problems with rendering technologies, supporting both Unity and Unreal Engine.
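To make the plane-detection point concrete, here is a minimal Swift sketch using standard ARKit API names; the view and session setup around it (e.g. an `ARSCNView`) is assumed and omitted:

```swift
import ARKit

// Enable detection of flat surfaces in a world-tracking session.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]

// An ARSCNView's session (SceneKit rendering) would then run it:
// sceneView.session.run(configuration)
```

Each detected surface is subsequently delivered to the session's delegate as an `ARPlaneAnchor`.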
What's new in ARKit 3.0? (updated 06/2019)
•People Occlusion – Now AR content realistically passes behind and in front of people in the real world, making AR experiences more immersive while also enabling green screen-style effects in almost any environment.
•Motion Capture – Capture the motion of a person in real-time with a single camera. By understanding body position and movement as a series of joints and bones, you can use motion and poses as an input to the AR experience — placing people at the center of AR.
•Simultaneous Front and Back Camera - Now you can simultaneously use face and world tracking on the front and back cameras, opening up new possibilities. For example, users can interact with AR content in the back camera view using just their faces.
•Multiple Face Tracking – Now ARKit Face Tracking tracks up to three faces at once, using the TrueDepth camera on iPhone X, iPhone XS, iPhone XS Max, iPhone XR, and iPad Pro to power front-facing camera experiences like Memoji and Snapchat.
•Collaborative Sessions - With live collaborative sessions between multiple people, you can build a collaborative world map, making it faster for you to develop AR experiences and for users to get into shared AR experiences like multiplayer games.
•Additional Improvements - Detect up to 100 images at a time, and get an automatic estimate of the physical size of the image. 3D-object detection is more robust, as objects are better recognized in complex environments. And now, machine learning is used to detect planes in the environment even faster.
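Most of the ARKit 3 features above are opt-in flags on a session configuration. The following is a hedged configuration sketch using public ARKit API names; the surrounding view and session setup is assumed, and the image count is an arbitrary illustration:

```swift
import ARKit

// Sketch of the ARKit 3 options described above.
let configuration = ARWorldTrackingConfiguration()

// People Occlusion: segment people (with depth) so virtual content can
// pass behind them. Requires A12-class hardware, hence the check.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}

// Simultaneous Front and Back Camera: add user-face tracking to a
// world-tracking session on supported devices.
if ARWorldTrackingConfiguration.supportsUserFaceTracking {
    configuration.userFaceTrackingEnabled = true
}

// Collaborative Sessions: emit collaboration data to share with peers.
configuration.isCollaborationEnabled = true

// Image detection: track several reference images at once.
configuration.maximumNumberOfTrackedImages = 4

// Multiple Face Tracking: a front-camera configuration that tracks as
// many faces as the device supports (up to three on TrueDepth devices).
let faceConfiguration = ARFaceTrackingConfiguration()
faceConfiguration.maximumNumberOfTrackedFaces =
    ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces

// sceneView.session.run(configuration)
```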
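Motion Capture has its own configuration and anchor type rather than a flag. Below is a hedged sketch of the delegate side; the class name and the print statement are illustrative, not part of any shipped Apple sample:

```swift
import ARKit

// Run an ARBodyTrackingConfiguration, then read joint transforms from
// the ARBodyAnchor that ARKit delivers as the person moves.
class BodyTrackingDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // Transform of the head joint relative to the body root;
            // transforms like this can drive a rigged character.
            if let head = bodyAnchor.skeleton.modelTransform(for: .head) {
                print("head position:", head.columns.3)
            }
        }
    }
}

// To start: session.run(ARBodyTrackingConfiguration())
```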
Google, also a competitor for the best-AR-platform title, countered with ARCore:
•Motion Tracking – This is one of its most important features. ARCore combines camera images with data obtained from the device's inertial sensors to determine the device's position and orientation.
•Light Estimation – It senses the overall characteristics of the scene's lighting and adjusts virtual content to account for missing or extra light. For example, in a dim room it will automatically darken virtual objects to match.
•User Interaction – ARCore casts a ray from a screen tap through the device's camera view to find where it intersects detected surfaces, letting users place and select virtual content.
•Anchoring Objects – For a virtual object to stay in its proper place, ARCore sets an anchor, which gives it the ability to track the object's position as the device moves and its understanding of the scene improves.
What's new in ARCore v1.13.0? (updated 05/2019)
•Cloud Anchors – Using Cloud Anchors, your app lets users add virtual objects to an AR scene. Multiple users can then view and interact with these objects simultaneously from different positions in a shared physical space. Cloud Anchors are similar in behavior and function to anchors but differ in that they’re hosted on the ARCore Cloud Anchor API service. This hosting enables users to share experiences.
•Environmental HDR in Sceneform - The Lighting Estimation API analyzes a given image for discrete visual cues and provides detailed information about the lighting in a given scene. You can then use this information when rendering virtual objects to light them under the same conditions as the scene they're placed in, making these objects feel more realistic and enhancing the immersive experience for users.