Swift ARKit (Augmented Reality) Tutorial

Jul 3, 2017 | Posted by Andrew | iOS, Swift, Tutorial, Xcode

In this tutorial we take a look at ARKit, newly introduced at WWDC 2017. It is available from iOS 11 onwards with Swift 4, and it lets you easily create augmented reality apps with good performance.

Storyboard Setup

First of all, set up Main.storyboard as follows:

  • ARSCNView with constraints to take up the full screen
  • Add Cube button with constraints to stay in the bottom left
  • Add Cup button with constraints to stay in the bottom right

Now open up the Assistant editor and connect the objects we just placed to the ViewController.swift class as follows (a minimal skeleton of the resulting class is sketched after the list):

  • ARSCNView to an outlet named sceneView
  • Add Cube to an action named addCube
  • Add Cup to an action named addCup
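
To sanity-check the wiring, here is that minimal skeleton of ViewController.swift once the outlet and actions are connected (the empty bodies get filled in over the rest of the tutorial):

    import UIKit
    import ARKit

    class ViewController: UIViewController {

        // The full-screen AR view from Main.storyboard
        @IBOutlet weak var sceneView: ARSCNView!

        override func viewDidLoad() {
            super.viewDidLoad()
            // ARKit session setup goes here (see the next section)
        }

        // Wired to the Add Cube button
        @IBAction func addCube(_ sender: Any) {
        }

        // Wired to the Add Cup button
        @IBAction func addCup(_ sender: Any) {
        }
    }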

Setup

In viewDidLoad, add the following to set up the ARKit session. This starts world tracking, so you can look through the camera and see augmented content placed in the real world.

        // Configure and start a world-tracking AR session.
        // Note: this class was called ARWorldTrackingSessionConfiguration in the
        // WWDC 2017 betas; the released iOS 11 SDK renamed it to ARWorldTrackingConfiguration.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
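
It is also good practice (though not required for this small demo) to pause the session when the view goes off screen, so the camera and tracking are not left running in the background; a minimal sketch:

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        // Stop camera capture and world tracking while the view is not visible
        sceneView.session.pause()
    }

Calling sceneView.session.run(configuration) again, for example from viewWillAppear, resumes tracking.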

Adding a cube

To add a cube, change the addCube function to the following:

    @IBAction func addCube(_ sender: Any) {
        // A 10cm cube (SceneKit units are meters), placed 20cm in front of the world origin
        let cubeNode = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0))
        cubeNode.position = SCNVector3(0, 0, -0.2)
        sceneView.scene.rootNode.addChildNode(cubeNode)
    }
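
By default the box renders with SceneKit's plain white material, which can be hard to spot against a bright scene. If you like, you can tint it inside addCube, just before the addChildNode call; a one-line, optional tweak (the red color is an arbitrary choice):

        // Optional: tint the cube so it stands out against the camera feed
        cubeNode.geometry?.firstMaterial?.diffuse.contents = UIColor.red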

Now, to run your app you need an actual device, because the simulator does not have a camera, and the device must be running iOS 11. Also, under your app target's Info.plist add the key “Privacy – Camera Usage Description” (the raw key is NSCameraUsageDescription) with a value that iOS will show when it asks the user for camera access. Ours is “We need your camera to show you cool AR stuff”.

Once you have done this you can run the app on your phone. Woohooo! We can place a cube in augmented reality! How cool is that!

But wait a minute, why does Add Cube only ever show one cube? What's happening?

In a nutshell, more cubes do get added, but at the exact same coordinates, so it looks like only one is ever added. So why is this the case?

The node problem

When you launch an ARKit app, it creates an initial node at position (0, 0, 0). This is the point in the augmented reality world where your iPhone/iPad was when you launched the app. You can see this by launching the app we made, moving to the side, then adding a cube. You will notice it still gets added 20cm in front of the location where you launched the app.
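
A quick way to literally see this initial node is to turn on ARKit's built-in debug visualization in viewDidLoad; it draws the world origin's axes and the feature points ARKit is tracking (purely an optional debugging aid):

        // Render the world origin axes and detected feature points (debug only)
        sceneView.debugOptions = [ARSCNDebugOptions.showWorldOrigin,
                                  ARSCNDebugOptions.showFeaturePoints]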

Visualizing the node problem

To visualize the node problem in AR, add the following function to generate a random number:

    // Returns a random Float in the range [min, max]
    func randomFloat(min: Float, max: Float) -> Float {
        return (Float(arc4random()) / 0xFFFFFFFF) * (max - min) + min
    }

Now we are going to generate a random number between -2 and -0.2 and use it as the cube's z position in place of -0.2, so each cube lands at a random depth, as follows:

    @IBAction func addCube(_ sender: Any) {
        // Random depth between 20cm and 2m in front of the world origin
        let cZ = randomFloat(min: -2, max: -0.2)
        let cubeNode = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0))
        cubeNode.position = SCNVector3(0, 0, cZ)
        sceneView.scene.rootNode.addChildNode(cubeNode)
    }

Now run the app – you will notice that when you place a cube, it gets placed at a random distance from the position where you launched the app. This shows that the cube's position is measured from the initial “node” (the world origin) created when an ARKit app is launched.

Solving the node problem – use the camera’s position

To solve this we can add the following function to get the camera's position, along with a structure that stores the coordinates in a single value containing everything we need.

    struct myCameraCoordinates {
        var x = Float()
        var y = Float()
        var z = Float()
    }
    
    // Requires `import ModelIO` at the top of the file for MDLTransform
    func getCameraCoordinates(sceneView: ARSCNView) -> myCameraCoordinates {
        var cc = myCameraCoordinates()
        
        // currentFrame is nil until the session has produced its first frame
        guard let cameraTransform = sceneView.session.currentFrame?.camera.transform else {
            return cc
        }
        let cameraCoordinates = MDLTransform(matrix: cameraTransform)
        
        cc.x = cameraCoordinates.translation.x
        cc.y = cameraCoordinates.translation.y
        cc.z = cameraCoordinates.translation.z
        
        return cc
    }
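
If you would rather not bring in ModelIO just to read the translation, the same three values can be taken directly from the fourth column of the camera's 4x4 transform matrix; an equivalent sketch:

    func getCameraCoordinates(sceneView: ARSCNView) -> myCameraCoordinates {
        var cc = myCameraCoordinates()
        // The translation lives in the fourth column of the camera transform
        if let transform = sceneView.session.currentFrame?.camera.transform {
            cc.x = transform.columns.3.x
            cc.y = transform.columns.3.y
            cc.z = transform.columns.3.z
        }
        return cc
    }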

Then, to use this, we replace the position code in the addCube function as follows:

    @IBAction func addCube(_ sender: Any) {
        let cubeNode = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0))
        let cc = getCameraCoordinates(sceneView: sceneView)
        cubeNode.position = SCNVector3(cc.x, cc.y, cc.z)
        sceneView.scene.rootNode.addChildNode(cubeNode)
    }
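
One thing to be aware of: this spawns the cube exactly at the camera's position, so you have to step backwards to see it. If you would rather drop it a short way in front of the lens, one common variation (sketched here with an arbitrary 30cm offset) is to move along the camera's viewing direction, which is the negated third column of the camera transform:

    @IBAction func addCube(_ sender: Any) {
        guard let transform = sceneView.session.currentFrame?.camera.transform else { return }
        // Camera position = fourth column; forward direction = negated third column
        let position = transform.columns.3
        let forward = transform.columns.2
        let cubeNode = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0))
        cubeNode.position = SCNVector3(position.x - 0.3 * forward.x,
                                       position.y - 0.3 * forward.y,
                                       position.z - 0.3 * forward.z)
        sceneView.scene.rootNode.addChildNode(cubeNode)
    }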

Adding a cup

First of all, download Models.scnassets and add it to your project. It contains 3D models of several objects, such as:

  • Cup
  • Candle
  • Chair
  • Vase

In the addCup function, change the code to the following to add a cup from our Models.scnassets:

    @IBAction func addCup(_ sender: Any) {
        let cupNode = SCNNode()
        
        // Place the cup at the camera's current position
        let cc = getCameraCoordinates(sceneView: sceneView)
        cupNode.position = SCNVector3(cc.x, cc.y, cc.z)
        
        // Load the cup model from Models.scnassets
        guard let virtualObjectScene = SCNScene(named: "cup.scn", inDirectory: "Models.scnassets/cup") else {
            return
        }
        
        // Wrap all of the model's nodes in a single node
        let wrapperNode = SCNNode()
        for child in virtualObjectScene.rootNode.childNodes {
            child.geometry?.firstMaterial?.lightingModel = .physicallyBased
            wrapperNode.addChildNode(child)
        }
        cupNode.addChildNode(wrapperNode)
        
        sceneView.scene.rootNode.addChildNode(cupNode)
    }

Now run your app and you can add a cup! Keep in mind that when you start adding a large number of objects, some may not appear or the app may run slowly, as ARKit is still quite taxing on performance.

In a nutshell, this code first loads our cup.scn object. It is then loaded into a “wrapperNode”; the reason for this is that our cup could have several nodes for different items. For example, the plate, cup and spoon can all be different nodes that together form one object.
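
Since the other models in Models.scnassets presumably follow the same layout as the cup (a folder per model containing a .scn file with the same name), the loading code generalizes naturally. A sketch of a reusable helper built on that assumption:

    func addModel(named name: String) {
        let cc = getCameraCoordinates(sceneView: sceneView)

        // Assumes the layout Models.scnassets/<name>/<name>.scn, as with the cup
        guard let virtualObjectScene = SCNScene(named: "\(name).scn", inDirectory: "Models.scnassets/\(name)") else {
            return
        }

        // Wrap every child node of the model's scene in a single node,
        // so multi-part models behave as one object
        let wrapperNode = SCNNode()
        for child in virtualObjectScene.rootNode.childNodes {
            child.geometry?.firstMaterial?.lightingModel = .physicallyBased
            wrapperNode.addChildNode(child)
        }
        wrapperNode.position = SCNVector3(cc.x, cc.y, cc.z)

        sceneView.scene.rootNode.addChildNode(wrapperNode)
    }

With this in place, addCup can simply call addModel(named: "cup"), and the candle, chair and vase can be added the same way.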

Download the source code
