Thank you very much.
Hi, everyone.
It's so nice to be here.
Back here.
I'm so jet lagged, but I'm happy.
As he said, I'm a Google Developer Expert for Angular and web technologies, and my name is really, really hard to pronounce. And there's research, I know who did that, showing that if people can't pronounce your name, they don't believe in you.
So, I changed my name on Twitter.
If you remember the first three letters, you are fine.
And I really enjoy putting dinosaurs around.
If I can't find them, then I can put augmented reality dinosaurs around.
So, AngularConnect is the right place for me this year.
Just wanted to give you an overview of what we are going to talk about today: what is available, the web APIs that make XR possible, WebXR and geolocation, because the XR experience we are going to create depends on location.
And the other part of the this talk is about how to integrate the APIs into Angular and
creating real time applications.
And most importantly, creating reusable component libraries.
You can find my slides at bit.ly/webXR2018, and all of the code, and much more than I'm able to say today, on my GitHub. The live demo will be available after that.
So, thank you for coming. Some of you might wonder why this is augmented reality and WebXR at the same time, so I just wanted to give you some context. We have reality, and we have virtual reality, where everything is made up: everything is 3D, everything is rendered. Augmented reality is something in between. The web APIs we have were called WebVR, but they were renamed to the WebXR Device API to enable augmented reality experiences on the web as well.
This is reality. If you are having surgery, if you are going to have an amputation or an operation on anything you have two of, this is what people do: they write on your body. And still, until the very last minute before you go into the operating room, people will ask you many, many times: which side? Which side is it?
I have been there twice and it's a horrible experience.
But recently in the United States, the Food and Drug Administration approved the Microsoft HoloLens, which is an augmented reality headset. What you have is real-time information right in front of you, just by wearing glasses. Now when you go into surgery, people have so much more information than what you write on your own body.
And virtual reality is also part of it. This is a 3D scan of your brain, and the surgeons can walk through your brain at every level. If they want to find something, they will find it there.
It's not just games and entertainment; there are so many great applications we can create with WebXR. And today is the time to create them, because right now there are hundreds of millions of devices with capabilities we are not taking full advantage of. And relevant content at the right time, in the right place, is very, very important.
And lastly, the web is very important to me. Although our devices, Androids and iPhones, are so much more capable when it comes to creating 3D content, the web matters because it's available to everyone.
So, if you want to create something, where do you go today? WebXR is available in Chrome Canary, and the rest of it is available behind a few flags. This is what you do: you open chrome://flags, search for WebXR, and enable the WebXR Device API and WebXR Hit Test flags so you can build and debug your application properly.
WebGL is very much in use everywhere and available in every browser, but WebXR support, unfortunately, is not there yet.
The other thing we can do today, if you're using Chrome, is sign up for origin trials. That allows every user who comes to your website to have this experience without having to enable any of the flags, which is a really cool thing.
There is also a WebXR polyfill that the Immersive Web community is developing, so some of the capabilities that are not implemented by some browsers become available through the polyfill.
And lastly, the thing we can do, and must do in any case, is provide a progressive web experience. If a user comes to your website without a device that can show 3D content, they can still click around the 3D scene with their mouse. If they have an XR device, they can have the same experience in XR, and AR is the other part of it.
And there are a lot of tools out there today. Although everything is pretty new, we still have a lot of open source resources and libraries. One of them, here on top, is A-Frame: a very easy to use declarative language, and it's really, really nice. But I really like this one; this is what we are going to use today: three.js. And on the left-hand side here is Poly, an open source 3D model library. You can take any of the models for free and use them, and it has a great API.
So, why am I using three.js? It's flexible. A-Frame, although it's easy to use and really nice, doesn't give you that much flexibility or performance. And also, I think it's really, really fun.
So, let's write some pseudocode. For the Angular component we want to create for an AR experience, first we check if AR is available on the user's device. Once we know it's available, entering AR is a user action: you either touch somewhere or click a button to start the experience. Once AR is started, we start a session on the device, and on every movement of the user we re-render the scene. What we have is the video stream of what the user sees, and we render the 3D content on top of it.
The first thing we do is check whether WebXR is available on the device, which depends on those flags we enabled at the very beginning; in code, that's feature detection, for example checking for navigator.xr. What we render is a canvas over the video stream, and a canvas can have multiple contexts, 2D or 3D. In this case we are going to check whether an XR-capable context is present, start our session with that context, and then, once the session is available, start rendering.
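That check and session start can be sketched like this. The WebXR API surface was still in flux at the time of this talk, so the names below (navigator.xr, requestSession("immersive-ar"), the xrCompatible context flag) follow the spec drafts and may differ from what a given Canary build ships:

```typescript
// A pure check so it can be exercised with a fake navigator object.
export function supportsXR(nav: { xr?: unknown } | null): boolean {
  return !!nav && typeof nav === "object" && !!nav.xr;
}

export async function enterAR(canvas: HTMLCanvasElement): Promise<{ session: any; gl: any }> {
  const nav = navigator as Navigator & { xr?: any };
  if (!supportsXR(nav)) {
    throw new Error("WebXR is not available; fall back to plain 3D with mouse controls");
  }
  // Session creation must happen inside a user gesture (tap or button click).
  const session = await nav.xr.requestSession("immersive-ar");
  // Ask for an XR-compatible WebGL context so the session can composite
  // our 3D content over the camera feed.
  const gl = canvas.getContext("webgl", { xrCompatible: true } as any);
  return { session, gl };
}
```

The pure `supportsXR` helper is what drives the progressive fallback mentioned earlier: when it returns false, the same component can render the plain mouse-driven 3D view instead.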
So, WebGL is the 3D graphics library that's available in the browser, and we need a WebGL renderer; rendering is something like taking a video or photograph of the 3D scene you are creating. But we need to make this renderer transparent, with zero opacity in the background, so we can show the real world behind it.
And any time we put a model into our world, we can add shadows or lighting for extra realism.
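A transparent renderer might be set up like this with three.js. Passing the THREE namespace in as a parameter is my own choice for testability, not something the library requires:

```typescript
// Sketch: a WebGL renderer whose background is fully transparent, so the
// device camera feed behind the canvas stays visible.
export function makeArRenderer(THREE: any, canvas: any) {
  const renderer = new THREE.WebGLRenderer({
    canvas,
    alpha: true,      // transparent framebuffer: the camera feed shows through
    antialias: true,
  });
  renderer.setClearColor(0x000000, 0); // clear to fully transparent
  renderer.shadowMap.enabled = true;   // shadows add a lot of perceived realism
  return renderer;
}
```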
Once we have our context, we still have to check that the particular device your user has is compatible with the session you are trying to create. Once we have that, we can start our scene and then add any objects to it.
What I mean by scene is: you can think of it as a stage. Everything you have, every light and every object, is going to be held in this scene, which is basically a container object.
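The stage idea can be sketched like this (THREE injected again, and the light color and position are arbitrary choices of mine):

```typescript
// Sketch: the scene is the container ("stage") that holds every light
// and every object we place into the world.
export function buildScene(THREE: any) {
  const scene = new THREE.Scene();
  // A white directional light from above, so placed models don't render
  // flat and unlit.
  const light = new THREE.DirectionalLight(0xffffff, 1);
  light.position.set(0, 10, 0);
  scene.add(light);
  return scene;
}
```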
How many of you have used requestAnimationFrame? Not many. So, requestAnimationFrame is available to us in regular JavaScript, but what we have here is specific to the XR session. The session's requestAnimationFrame gives us access to the pose of the camera and also tries to render really fast, at 60 frames per second.
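The loop itself can be sketched independently of any real device; session here is anything that exposes a requestAnimationFrame method, so a stub can drive it:

```typescript
// Unlike window.requestAnimationFrame, the XR session's version hands the
// callback an XRFrame carrying the viewer pose for that instant.
export type FrameCallback = (time: number, frame: unknown) => void;

export interface SessionLike {
  requestAnimationFrame(cb: FrameCallback): void;
}

export function startRenderLoop(
  session: SessionLike,
  renderOnce: (frame: unknown) => void,
  shouldContinue: () => boolean = () => true,
): void {
  const onFrame: FrameCallback = (_time, frame) => {
    renderOnce(frame); // draw the scene for this frame's pose
    if (shouldContinue()) {
      session.requestAnimationFrame(onFrame); // re-schedule: ~60 times a second
    }
  };
  session.requestAnimationFrame(onFrame);
}
```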
What I didn't include here is creating the 3D geometry and the material. But my first talk here (this is my fourth) was about WebGL in general, not WebXR, so you can watch that on the AngularConnect channel. One thing I really wanted to share is a really cool resource that explains how to create more realism by using material and texture maps.
We are really concerned about the amount of data we are sending; we don't want a very detailed model that we have to re-render constantly with every move of the user. To keep the experience sleek, we can bring more realism by using textures, lighting, and materials instead, and there are great resources on those links.
And I mentioned we are going to use Poly. I want to show you, but that's not possible right now. Poly has lots of models; everyone is free to add their own models, but there are some professional models there too, from Google.
And it has a great API: not only can you download the models, you can also just make an HTTP request to get them. You can even have a little search box in your application, so people can search for any model, maybe dinosaurs in this case, and put whatever they want into that location.
To be able to use this API, you have to get an API key from Poly, which is very easy. Once you have a model, you have to add it to your scene. There are different formats of models, and here I'm checking that the format is the one I'm looking for, because different formats require different loaders.
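A sketch of the HTTP side might look like this; the endpoint and field names follow the Poly REST API documentation of the time, and the key is a placeholder:

```typescript
export interface PolyFormat {
  formatType: string; // e.g. "OBJ", "GLTF"
  root: { url: string };
}
export interface PolyAsset {
  displayName: string;
  formats: PolyFormat[];
}

// Build the search URL for the Poly assets endpoint.
export function polySearchUrl(keywords: string, apiKey: string): string {
  const params = new URLSearchParams({ keywords, format: "OBJ", key: apiKey });
  return `https://poly.googleapis.com/v1/assets?${params}`;
}

// Different formats need different three.js loaders (OBJLoader, GLTFLoader, ...),
// so pick the format the app actually supports before loading.
export function pickFormat(asset: PolyAsset, wanted = "OBJ"): PolyFormat | undefined {
  return asset.formats.find((f) => f.formatType === wanted);
}

export async function searchPoly(keywords: string, apiKey: string): Promise<PolyAsset[]> {
  const res = await fetch(polySearchUrl(keywords, apiKey));
  const body = await res.json();
  return body.assets ?? [];
}
```

With the search box from the slide, the user's query ("dinosaurs") goes straight into searchPoly, and pickFormat decides which loader to hand the resulting URL to.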
Now, how do we do user interaction? Have you ever used any VR devices? A very low-end VR device like Google Cardboard doesn't come with controllers; you just pop your phone into the cardboard. What you use instead of clicking or a hand device is your gaze: wherever you turn, you cast a ray towards an object, and that becomes your selection.
The same idea also works for AR. Instead of where you're looking (because we don't have the head position with AR), we use where you're tapping: from the point you tap, we cast a ray and look for hits everywhere that ray passes through. If any object is hit, that becomes our selection.
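The tap-to-ray step starts with a small coordinate conversion that is easy to get wrong, so here it is as a sketch:

```typescript
// Convert a tap at (clientX, clientY) into normalized device coordinates,
// the -1..1 space that raycasters (e.g. THREE.Raycaster.setFromCamera) expect.
export function toNDC(
  clientX: number,
  clientY: number,
  width: number,
  height: number,
): { x: number; y: number } {
  return {
    x: (clientX / width) * 2 - 1,   // left edge -> -1, right edge -> +1
    y: -(clientY / height) * 2 + 1, // top edge -> +1 (screen y grows downward)
  };
}
```

From there, with three.js, raycaster.setFromCamera(ndc, camera) followed by raycaster.intersectObjects(scene.children, true) returns the hits sorted by distance, and the first hit becomes the selection.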
There are other cool things you can do for interactivity, like using Resonance Audio, a great spatial audio library that lets you create sounds that come from different places. For example, you can attach some kind of noise to a dinosaur, I don't know what that would be, and place it in different spots. If you are wearing headphones, the sounds come from different directions, and you can use that to create great experiences.
The rest of the code is mostly details, so I didn't include it; this is pretty much the overview. But you can find all of the details online.
And let's talk about Angular. "Creating UI libraries is too much work," he said one day. He said if you are creating a reusable component library, you have to create another repo, then package it, publish it back to your project, reinstall it, reuse it. And that's a really horrible workflow, which is why we end up repeating, copying, and pasting the same code between different projects all the time.
But luckily, the folks at Nrwl created Nx, a very lightweight extension on top of the Angular CLI. It allows you to create reusable libraries and makes a few other things very easy. And not only that, they created Angular Console, a UI that makes it really, really easy to use.
This is Angular Console, and I already have the app here. If I click on it, I have a bunch of options, the things that are commonly used. These are some of my libraries, and usually I'm either serving my applications or generating a new component, so those are the most common tasks. There's also another section where you can go through all of the Angular Material schematics or the Angular CLI commands and choose any of the options without having to remember all of their arguments.
So, let's create a new library; we'll call it AngularConnect. Once you name your library, you can define a directory, and then you can make it publishable very easily. This creates a library in the same repository as the application, but you can easily export it and use it somewhere else.
We have a few other options here, all of the options that usually come with the CLI. Once we click, we generate a new library, and in the same way you can create a component or a service. And the cool part is that you don't actually have to remember any of the arguments to get the job done. I believe you'll like it.
So, what happens when you have these libraries? Another person, John, said creating UI libraries creates lots of extra code: when you start a project and begin organizing your code this much, people tend to think it's just too much work, and they're not sure if it's worth it or if it's overcomplicating things.
And there are lots of UI libraries out there that you can use, so why do I create my own? Because even if you are using something like Angular Material, for example, to create a data table, you still have to implement the sorting, the filtering, the headers, and all the rest. In John's case, he created 12 of them. In the end these things add up, and over time, when you want to change anything in your application, you have 12 of these things to change, manage, and re-create all the time.
You can do this in other ways too, but this is the way I have been doing it. You can create different applications; in this case, these are the two demo applications I have. They share some of the libraries, and they each have their own separate libraries.
So, what we have here is a UI component library and a UI shell, but I created an extra one specific to XR, because I can use it for both augmented reality and virtual reality. And the feature library is where I have the specific features, with smart components that hold the data and the state.
So, how do we create these components so that they're reusable across all of our applications? We expose all of our XR options as inputs. This was already available to us in AngularJS with directives, but it's so much more intuitive now, and very easy. We can give default values, and any time we have to change anything, all we have to do is add to these options, and we don't break any other code.
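As a sketch, the options-with-defaults pattern can look like this; the option names here are made up for illustration, not the actual inputs of my library:

```typescript
// Sketch: component options merged over defaults. Adding a new option to
// the defaults doesn't break existing callers, which is what keeps the
// component reusable across applications.
export interface XrOptions {
  enableShadows: boolean;
  backgroundAlpha: number; // 0 = transparent, so the camera feed shows through
  modelUrl?: string;
}

export const DEFAULT_XR_OPTIONS: XrOptions = {
  enableShadows: true,
  backgroundAlpha: 0,
};

export function resolveXrOptions(overrides: Partial<XrOptions> = {}): XrOptions {
  return { ...DEFAULT_XR_OPTIONS, ...overrides };
}
```

In the Angular component, each of these fields would be an @Input() whose initializer provides the default, so templates only bind the options they actually want to change.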
The data in this case is location data that arrives asynchronously into my feature library, which holds the smart components, and I can pass the data down to the view components. But one thing that happens sometimes is that we use the async pipe many times, in different places in the same component, and each one subscribes to the data source separately, which can cause duplicate work. That's something to be mindful of.
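With RxJS the usual fix is to pipe the source through shareReplay(1) once and bind that single stream with async. Here is a dependency-free sketch of the underlying idea, showing that a shared source runs only once no matter how many subscribers attach:

```typescript
// A "source" calls emit for each value it produces; share() makes sure the
// underlying source runs only once, replaying cached values to late subscribers.
export type Source<T> = (emit: (value: T) => void) => void;

export function share<T>(source: Source<T>): Source<T> {
  let started = false;
  const cache: T[] = [];
  const listeners: Array<(value: T) => void> = [];
  return (emit) => {
    cache.forEach(emit); // replay values that already arrived
    listeners.push(emit);
    if (!started) {
      started = true; // the expensive source runs exactly once
      source((value) => {
        cache.push(value);
        listeners.forEach((listener) => listener(value));
      });
    }
  };
}
```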
The other thing we are using for this application is Firebase, specifically GeoFire, one of the newer libraries. It allows you to save your data with a geolocation attached to it, and when you need to query that data, you give it a range, a radius, and it calculates everything around your point. It's pretty much what Google Maps does, but for your database.
So, let's see what it does. This is Firebase's geolocation library: GeoFire is different from regular storage in that it creates a special data type that you can query by location.
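The kind of query GeoFire answers can be sketched client-side with the haversine formula; GeoFire itself does this efficiently with geohash ranges, so this is only the idea, with made-up coordinates:

```typescript
export interface GeoPoint { lat: number; lng: number }

const EARTH_RADIUS_KM = 6371;

// Great-circle (haversine) distance between two points on Earth, in km.
export function distanceKm(a: GeoPoint, b: GeoPoint): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(h));
}

// "Give me everything within this radius of the user" — the shape of the
// query we ask GeoFire, e.g. which dinosaurs to render at this location.
export function withinRadius<T extends GeoPoint>(
  center: GeoPoint,
  points: T[],
  radiusKm: number,
): T[] {
  return points.filter((p) => distanceKm(center, p) <= radiusKm);
}
```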
One very unfortunate thing you will find with WebXR is having to debug your application. It's kind of painful, and hopefully it will get a little bit better in the future. If you haven't seen it, in your developer tools you can go to More tools, then Remote devices, and it will list all the devices that are connected. You also have to enable the developer options on your device. Once you have that, you can enter any URL, reload from there, and inspect.
So, I'm going to start my session. Oh. Okay. What we are doing here, unfortunately my cable is not very long, so I can't move that much, but what we are doing is looking for smooth surfaces to render on. Hopefully you'll try it after the party and put some dinosaurs around. The dinosaurs are tagged with geolocation information, and wherever you are, we request the data for that location from GeoFire and then render the dinosaurs.
Where do we go from here? The WebXR API is being discussed at the moment, and you can be part of it: you can go to the Immersive Web group on GitHub and create an issue or comment on one. If you're creating something, these APIs are being created for you, and you should be part of the conversation. There are a few blog posts and a few other resources here, and I have other AR and VR demos if you want to check them out.
And thank you so much.