23 4 / 2013
TraceGL works by instrumenting all of your code so it knows when calls took place, and all of the boolean logic that determined which code path to take. Then it visualizes all of this, using WebGL for performance, showing you a high level overview called the “mini map” in the top left, a log of function calls in the top right, the call stack in the bottom left, and finally the code for the function in the bottom right.
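The details of TraceGL’s instrumentation aren’t public, but the general technique is easy to sketch: wrap each function so that entries, arguments, and return values are recorded. This toy `instrument` helper (the name and shape are mine, not TraceGL’s) does that for the methods of an object:

```javascript
// Wrap every method of an object so each call's entry and exit is logged,
// similar in spirit to the call tracing TraceGL injects into your code.
function instrument(obj, log) {
  Object.keys(obj).forEach(function (name) {
    if (typeof obj[name] !== 'function') return;
    var fn = obj[name];
    obj[name] = function () {
      log.push({ call: name, args: [].slice.call(arguments) });
      var result = fn.apply(this, arguments);
      log.push({ ret: name, value: result });
      return result;
    };
  });
}
```

A real tracer rewrites the source itself so anonymous functions and branch conditions are captured too, but the flow of data is the same.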
As your code runs, TraceGL visualizes all of this data in real time. The mini map is useful to see the ebbs and flows of the code, i.e. where the stack gets deeper and shallower again. In this way, you can see where events are being processed, like mouse or keyboard events in the browser, or HTTP requests in a Node.js application, and then get to a section of the potentially very long call stack very quickly. TraceGL even works over asynchronous events, unlike most step debuggers, which means that these operations are still shown as part of a single call stack under their originating calls, rather than as separate events.
Here is a video showing TraceGL in use:
TraceGL can instrument both browser based and Node.js applications, and integrates with various editors so that double clicking a line can open your favorite editor. An interesting aspect of the UI is that it is written entirely using WebGL, apparently for performance reasons. Of course, all of the text rendering (most of the UI) must have been done in a 2D canvas and then uploaded to WebGL as a texture since WebGL has no native text rendering capabilities, but clever rendering tricks like only re-rendering what has changed can make things fast. And once the textures are on the GPU, moving them around, scaling them, etc. using shaders is very fast.
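As a rough sketch of that technique (assuming a WebGL context `gl` and a 2D `canvas` are available; the function name is mine), text is drawn with the 2D API and then uploaded as a texture:

```javascript
// Render text into a 2D canvas, then upload the bitmap to WebGL, where
// shaders can move and scale it cheaply without re-rendering the text.
function uploadTextTexture(gl, canvas, text) {
  var ctx = canvas.getContext('2d');
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.font = '16px monospace';
  ctx.fillText(text, 0, 16);

  var tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  // WebGL accepts a canvas element directly as a texture source.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, canvas);
  // (a real renderer would also set texParameteri filtering, since a text
  // canvas is usually not power-of-two sized)
  return tex;
}
```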
I think we’re probably going to see more and more WebGL user interfaces soon. We’ve seen a lot of 3D stuff written on top of WebGL, and it is certainly good for that, but I’m betting that normal 2D user interfaces on the web will start being written with it too, simply thanks to its great performance characteristics. HTML and CSS are great for documents and applications, to a point, but for web apps to compete with native on performance, hardware accelerated UIs on top of WebGL will be important.
Of course, building user interfaces using WebGL means that any text rendering that is done won’t be selectable, copyable, or accessible to screen readers without lots of additional work, so I can see frameworks being developed to facilitate this. I’ve already been working on and off on something similar to Apple’s Core Animation framework on top of WebGL (not public yet), and other interesting 2D frameworks like Pixi.js have been released recently. Especially with WebGL’s likely support in Internet Explorer 11, I think the age of WebGL user interfaces is upon us, and it’s exciting!
You can check out TraceGL on their website. It costs $15 to buy, but not all good tools are free and it’s nice to support good developers, so give it a shot and let me know what you think in the comments!
03 4 / 2013
John’s article talks about some of the use cases for asm.js, some of the common misconceptions about it, and finally includes a question and answer section with Mozilla’s compiler engineer David Herman, who is one of the authors of the asm.js specification. It’s definitely a good read, so check it out!
I think asm.js will be really important over the coming months and years, and I’m excited to see other browser vendors already getting on board. I got even more excited about it when I saw Mozilla and Epic Games’ demo showing the Unreal Engine running in the browser at very good performance, thanks to Emscripten and asm.js last week.
07 3 / 2013
The Leap Motion is a very cool piece of technology. It’s a small $80 box that you can put on your desk to control an ordinary computer using hand motions. It’s extremely accurate, allowing for very fine motor control using all of your fingers. If you haven’t seen it, be sure to check it out. I’m definitely getting in line for one to play with myself.
I have no doubt that there will be many more amazing demos like this as developers get their hands on the Leap Motion, which is slated to ship in May of this year for $80. If you have a great idea, you may even be able to get early developer access by signing up on their site.
05 3 / 2013
Thanks to the dcraw C library, Rawson.js enables viewing of RAW images from over 500 different cameras and many file types. The thing about RAW files is that each manufacturer has their own file format, and libraries and viewers must be updated when new cameras are released. This is why you will see operating system updates and other program updates for RAW image support every now and then.
Rawson.js is currently a bit slow, especially for large images. The image shown in the screenshot above is almost 30MB and it took about 30 seconds to render on my system. I have found it to be a lot faster in Firefox Nightly, I think thanks to asm.js, so try it there if you can. It sounds like they are working on a smarter renderer for the next version of Rawson, one that doesn’t have to render the whole image at 100% quality before displaying anything. My suggestion is to only render the visible parts, perhaps in parallel using Web Workers (which they should use in any case) in tiles if possible. They’ve also mentioned using the embedded JPEG previews in many RAW files to speed up initial rendering while the real thing is processed.
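That tiling suggestion is simple to sketch; a helper like this (illustrative, not part of Rawson) computes the tile rectangles, so that only the tiles intersecting the viewport would need to be decoded:

```javascript
// Split an image of the given dimensions into a grid of tile rectangles.
// Edge tiles are clipped so the grid exactly covers the image.
function tiles(width, height, tileSize) {
  var out = [];
  for (var y = 0; y < height; y += tileSize) {
    for (var x = 0; x < width; x += tileSize) {
      out.push({
        x: x, y: y,
        w: Math.min(tileSize, width - x),
        h: Math.min(tileSize, height - y)
      });
    }
  }
  return out;
}
```

Each rectangle could then be handed to a Web Worker, and only the visible ones decoded first.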
Once it is sped up a bit using some smarter rendering techniques, I think Rawson will be very important for browser based photo editing applications, since many professionals only work with RAW files for their quality. You can check out the source on Github, the project page, and the demo.
26 2 / 2013
Parallel.js allows you to spawn a worker containing one or more functions, defined in your parent thread rather than as a separate file. You just pass the function names and some arguments to call the function with, and Parallel.js will spawn a worker thread, run the function with the passed arguments, and send the result back to the main thread asynchronously using a Promise based API.
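Parallel.js’s internals aside, the general technique for spawning a worker from an in-page function is to serialize it with `fn.toString()` and load the result through a Blob URL; the helper names below are mine, not Parallel.js’s API:

```javascript
// Build the worker's source string by serializing the function.
function workerSource(fn) {
  return 'onmessage = function (e) { postMessage((' + fn.toString() + ')(e.data)); };';
}

// Browser-only: spawn a Worker from the generated source via a Blob URL,
// run it with one argument, and hand the result back asynchronously.
function spawn(fn, arg, done) {
  var url = URL.createObjectURL(
    new Blob([workerSource(fn)], { type: 'application/javascript' })
  );
  var worker = new Worker(url);
  worker.onmessage = function (e) {
    done(e.data);
    worker.terminate();
    URL.revokeObjectURL(url);
  };
  worker.postMessage(arg);
}
```

The limitation of this trick is that the function must be self-contained: it is re-evaluated inside the worker, so it cannot close over variables from the parent thread.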
There is also a MapReduce API for processing large datasets that will split the data into chunks to be processed in parallel by an arbitrary number of worker threads. When all of the processing is complete, the result is merged back together and returned to the main thread.
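The split-process-merge flow can be sketched sequentially (the helpers are mine; a real implementation would hand each chunk to a worker thread instead of mapping it inline):

```javascript
// Split a dataset into n roughly equal chunks, one per worker.
function chunk(data, n) {
  var size = Math.ceil(data.length / n), out = [];
  for (var i = 0; i < data.length; i += size) out.push(data.slice(i, i + size));
  return out;
}

// Sequential stand-in for the parallel step: map and reduce each chunk
// (this part would run inside a worker), then merge the partial results.
function mapReduce(data, n, map, reduce) {
  return chunk(data, n)
    .map(function (c) { return c.map(map).reduce(reduce); })
    .reduce(reduce);
}
```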
As web applications do more of their processing on the client side, we will need to take advantage of the multiple cores that modern machines have in order to maintain a good user experience. As a rule of thumb, we never want to do a lot of processing on the main UI thread, since the user will notice the lag. Web Workers are great, but in general quite difficult to use, so I’m glad to see libraries beginning to make things easier in this department.
19 2 / 2013
Like PeerJS, Holla has both a client and a Node.js server component. The server helps broker the peer-to-peer calls between clients with usernames. Once usernames are registered, the clients can make voice or video call requests as well as send chat messages to other client usernames. On the other end, the second client can either answer or decline the call, and then send the video stream to a <video> element to be displayed. The API looks really simple to use, and thanks to WebRTC it will let pretty much anyone build their own Skype in a couple of minutes.
The demo allows you to declare your username and then call and chat with other usernames, and shows the power of the API to do a lot with a very small amount of code. It’s a simple demo, and it only works in Chrome and Firefox Aurora so far, I think, but I’m looking forward to future widespread adoption of WebRTC. You can see the code for the demo on Github to get a feel for the niceness of the API, and then look at the rest of the code for the client and server to see what it’s doing for you.
18 2 / 2013
You opt in to using asm.js by including the "use asm"; string at the top of your file or individual function, just like you opt into strict mode with "use strict";. Type annotations are written using existing JavaScript coercion operators, so the code remains valid, ordinary JavaScript:
+a would annotate the variable a as a double.
a|0 would annotate it as an integer.
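Putting those pieces together, here is a toy module written in the asm.js style (my own example, not from the spec); since asm.js is a subset of JavaScript, it also runs as ordinary JS in any engine:

```javascript
function ToyMath(stdlib) {
  "use asm";                 // opt the whole module into asm.js validation

  function square(x) {
    x = +x;                  // annotate parameter x as a double
    return +(x * x);         // annotate the return value as a double
  }

  function inc(i) {
    i = i | 0;               // annotate parameter i as an integer
    return (i + 1) | 0;      // annotate the return value as an integer
  }

  // The returned object is the module's set of exported functions.
  return { square: square, inc: inc };
}
```

In an engine without an asm.js compiler, the coercions are no-ops at runtime and the module behaves like plain JavaScript, which is the backwards compatibility trick.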
If all of this sounds a bit tedious to write, I would agree, although it’s far from the worst syntax we could have. However, it isn’t really targeted at human authors, but at compilers like Emscripten, Mandreel, or LLJS, which can generate the cleverly backwards compatible but not terribly clear type annotations for you from an existing language like C, or a newer one like LLJS or TypeScript. Emscripten already generates valid asm.js output and was one of the main impetuses behind the project, and Firefox will be landing its asm.js optimizing compiler in the near future. The benchmarks look very impressive indeed.
Be sure to check out the asm.js spec, David Herman’s prototype asm.js validator on Github (written in JS!), and Emscripten developer Alon Zakai’s presentation about Emscripten, asm.js and the future. I’m looking forward to watching all of these projects as they develop!
15 2 / 2013
All of the rendering of these apps is done in one large canvas element, which you’d think might be slow, but at least for all of the demos I tried, it’s actually really fast and responsive. Of course, Qt has its own image generation routines built in, so it’s using the canvas as basically a way to blit pixels to the screen and nothing more. It’s pretty much the only way this could possibly work without rewriting large parts of Qt itself, and it works pretty well. They have an “experimental renderer” that you can enable, which is supposedly faster, but I haven’t seen much of a difference.
Let me just make this clear: the only canvas method emscripten-qt is using for rendering is putImageData. Nothing else. All text, paths, gradients, and UI components are rendered by Qt to pixels before even making it to the canvas. And it’s still pretty darn fast and smooth. I think this is both a confirmation that entirely canvas based UI toolkits are possible and a showcase for the power of Emscripten to make use of already written and optimized code on the web.
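The whole rendering model boils down to a single blit helper (this function is mine, not emscripten-qt’s code): Qt produces a raw RGBA buffer, and the canvas just displays it:

```javascript
// pixels: a Uint8ClampedArray of RGBA bytes, as produced by a software
// renderer like Qt's. The 2D context is only ever used to show them.
function blit(ctx, pixels, width, height) {
  var img = ctx.createImageData(width, height);
  img.data.set(pixels);          // copy the raw bytes into the ImageData
  ctx.putImageData(img, 0, 0);   // the one and only canvas drawing call
}
```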
14 2 / 2013
I just caught wind of PeerJS, a project that makes peer to peer networking easier using the new WebRTC browser APIs. WebRTC is extremely cutting edge, and the library currently only works in Chrome Canary and the Dev Channel, so take this with a grain of salt, but it is exciting to see libraries embracing it this early.
PeerJS actually consists of two parts: the client side script that communicates with other clients using WebRTC, and a Node.js server component that brokers the connections between the clients. The server keeps track of each client that is currently online so that the clients can become aware of each other. Once the clients know about each other, they can connect directly thanks to WebRTC’s RTCPeerConnection API, which allows sending arbitrary data between clients with an API very similar to the WebSocket API. Both binary and textual data will be supported, but right now only text is working.
PeerJS wraps all of this up in a nice API, and they even provide a free server for you to use with an API key if you don’t want to run your own. They also handle all of the complexities of working with the WebRTC API including handshaking, temporary binary string encoding until browsers implement sending binary data directly, and of course the actual server brokering of connections. WebRTC handles the actual networking complexities for you, including NAT traversal, UDP and the actual peer-to-peer connections themselves.
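As a sketch of how the client side fits together (the function, key, and ids are placeholders of mine; the `connect`/`send`/`on` calls follow the PeerJS examples), a minimal text chat might look like this, with the `Peer` constructor passed in so the sketch stays self-contained:

```javascript
// Wire up a two-way text chat between this peer and a known remote peer id.
function startChat(Peer, apiKey, friendId, onMessage) {
  var peer = new Peer({ key: apiKey });          // register with the broker server

  // Answer incoming data connections and listen for chat messages.
  peer.on('connection', function (conn) {
    conn.on('data', onMessage);
  });

  // Open an outgoing data connection and hand back a send function.
  var conn = peer.connect(friendId);
  return function send(text) {
    conn.send(text);
  };
}
```

Everything about signaling, handshaking, and the underlying RTCPeerConnection is hidden behind those few calls.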
The API looks really easy to use. You can check out a peer to peer chat demo using PeerJS online, and its source on Github as well. However, as I mentioned at the top of this post, the library currently only works in the Canary and Dev versions of Chrome 26, and Firefox apparently doesn’t work yet. It is exciting to see WebRTC coming along, both in terms of live video and audio transmission as well as arbitrary data. WebRTC is a major undertaking for browser vendors, but I think it will create some great opportunities for browser based apps in the future.
One of the things that might be made possible with WebRTC and a library like PeerJS is a browser based BitTorrent client. Previously it has been pretty much impossible because the only protocols supported via JS have been WebSockets and HTTP. I’m not entirely familiar with the details of WebRTC, but it sounds like it will give developers much more networking flexibility. I’m not sure what the security restrictions are. Same domain wouldn’t really apply here, so I guess it would probably just ask the user’s permission, as it should. Let me know if I’m mistaken, but I think a BitTorrent client could be a feasible possibility.
You can check out PeerJS on Github, and find the documentation and demos on their website. I’m looking forward to seeing the future of WebRTC as it is implemented in browsers and demos and libraries start to make use of it!
08 2 / 2013
To use it, you just require the source-map-support module from npm, and like magic all of your stack traces will now contain correct line numbers. It uses V8’s handy Stack Trace API to capture errors and rewrite their stack traces just before they are formatted and printed. It also uses Mozilla’s source-map module to do the actual source map parsing and mapping. That module can also be used for source map generation, and CoffeeScriptRedux does just that.
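The V8 hook it builds on is easy to demonstrate by itself: assigning `Error.prepareStackTrace` lets you intercept the structured call sites before they are formatted into a string, which is exactly where source map positions can be swapped in (this standalone demo is mine, not code from the module):

```javascript
// Capture structured stack frames using V8's Error.prepareStackTrace hook.
function callSiteLocations() {
  var original = Error.prepareStackTrace;
  Error.prepareStackTrace = function (err, callSites) {
    // Each CallSite object exposes getFileName(), getLineNumber(), etc. --
    // this is the point where a source map lookup could rewrite positions.
    return callSites.map(function (cs) {
      return cs.getFileName() + ':' + cs.getLineNumber();
    });
  };
  var frames = new Error().stack;   // with the hook installed, this is our array
  Error.prepareStackTrace = original;
  return frames;
}
```

Note that the hook runs lazily, only when the `.stack` property is first accessed, which is why the rewriting adds almost no overhead until an error is actually printed.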
Anyway, if you have been itching for source map support in Node.js like I have, go check out source-map-support on Github and npm, and start debugging better. Source maps can solve 90% of the issues people have had with compile to JS languages, so I’m glad to see them arriving in both the browser as well as Node.js.