Developer experiences from the trenches

Livecoding Side By Side Content in VR

Wed 06 July 2016 by Michael Labbe

Processing in BigScreen

One of the really cool things about social VR is that a little technical know-how can go a long way. Recently, I have been exploring social VR through BigScreen VR, an app that lets you see your monitor in the context of a large virtual room shared by others. Imagine a virtual LAN party and you have a good idea of what's going on.

Quietly, the BigScreen developers dropped an SBS mode into the "Beta Features" menu. All of a sudden, it is possible to socially livecode 3D content in a VR world. And it's possible to do this with a top-to-bottom free software stack. How cool is that?

What is Side-By-Side (SBS) content? Check out this Youtube video to immediately get a sense of what is going on. In SBS videos, the image for each eye is encoded in one half of the video framebuffer, with a small horizontal offset in the camera position. If you watch this in a side-by-side video app, each eye sees only its half of the video. Your brain is tricked into thinking it is seeing 3D objects. Leaning from side to side even triggers a sense of parallax.

Because BigScreen shares your screen, you can develop software and test it right inside your VR headset. Enter Processing, a free visual coding environment with support for 3D rendering and offscreen framebuffers.

To render SBS content in Processing, you simply render the scene to two offscreen framebuffers, offsetting the camera position between them.

Here is sample code with comments that demonstrates exactly how this works. You can copy and paste this into a Processing 3 IDE window and just run it.
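The original sample isn't reproduced inline on this page, so what follows is a minimal sketch of the technique, assuming Processing 3's P3D renderer. The eye separation, camera distance and spinning-box scene are placeholder choices; tune them to taste.

    // Side-by-side (SBS) stereo in Processing 3: render the scene twice
    // into offscreen framebuffers with a horizontal camera offset, then
    // composite each buffer into one half of the screen.
    PGraphics leftEye, rightEye;
    float eyeSep = 30.0;  // placeholder eye separation, in world units

    void setup() {
      size(1280, 720, P3D);
      // Each eye gets half of the horizontal resolution.
      leftEye  = createGraphics(width / 2, height, P3D);
      rightEye = createGraphics(width / 2, height, P3D);
    }

    void drawScene(PGraphics pg, float eyeOffset) {
      pg.beginDraw();
      pg.background(16);
      pg.lights();
      // Shift both the eye and the look-at target so the two
      // cameras stay parallel, offset only on the x axis.
      pg.camera(eyeOffset, 0, 300,  // eye position
                eyeOffset, 0, 0,    // look-at target
                0, 1, 0);           // up vector
      pg.rotateY(frameCount * 0.01);  // stand-in content: a spinning box
      pg.noStroke();
      pg.box(100);
      pg.endDraw();
    }

    void draw() {
      drawScene(leftEye, -eyeSep / 2);
      drawScene(rightEye, eyeSep / 2);
      image(leftEye, 0, 0);           // left half of the frame
      image(rightEye, width / 2, 0);  // right half of the frame
    }

Run it, share your screen in BigScreen, flip on the SBS beta feature, and the two halves fuse into a single stereo image.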

If you are simply curious, this YouTube render shows what the Processing sketch looks like.

Git-Svn Considered Harmful

Sun 31 May 2015 by Michael Labbe
tags code 

Git-svn is the bridge between Git and SVN. It is more dangerous than descending a ladder into a pitch-black, bottomless pit. On the ladder, the tactile response of your foot hitting thin air prompts you to stop descending. With git-svn, you just sort of slip into the pit, and short of being Batman, you're not getting back up and out.

There are plenty of pages on the Internet explaining how to use git-svn, but few explaining when you should avoid it. Here are the major gotchas:

Be extremely careful when further cloning your git-svn repo

If you clone your git-svn repo, say to another machine, know that it will be hopelessly out of sync once you run git svn dcommit. Running dcommit reorders all of the changes in the git repo, rewriting their hashes in the process.

When pushing or pulling changes from the clone, Git will not be able to match hashes. Heeding this warning saves you from a late-stage manual re-commit of all your changes from the cloned machine.
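To make the hazard concrete, here is a hypothetical session; the machine names and repository URL are invented for illustration:

    # machine A: bridge an SVN repo and make one local commit
    git svn clone https://svn.example.com/project project
    cd project
    git commit -am "feature work"

    # machine B: clone the git-svn repo from machine A
    git clone machineA:project

    # machine A: dcommit pushes to SVN, rewriting "feature work"
    # under a brand new hash
    git svn dcommit

    # machine B: its history still points at the old hash, so pushes
    # and pulls between the two machines no longer line up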

Git rebase is destructive, and you’re going to use it.

Rebasing is systematic cherry-picking. All of your pending changes are reapplied on top of the head of the git repo. In real-world scenarios, this creates conflicts which must be resolved by manually merging.

Any time there is a manual merge, the integrity of the codebase is subject to the accuracy of your merge. People make mistakes; conflict resolution can introduce bugs and force teams to re-test the build.

svn dcommit fails if changes conflict

This might seem obvious, but consider it in the context of the previous admonishment. If developers are committing to SVN while you perform time-consuming rebases, you are racing to finish a rebase so you can commit before you are out of date again.

Getting into a rebase, dcommit, fail, rebase loop is a real risk. Don't hold on to too many changes, as continuously rebasing calls on you to manually re-merge.
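Sketched as a command sequence (the exact error text varies with the SVN server), the loop looks like this:

    git svn dcommit   # fails: SVN head moved, transaction is out of date
    git svn rebase    # replay local commits onto the new SVN head
                      # ...resolve conflicts, git add, git rebase --continue...
    git svn dcommit   # succeeds only if nobody committed in the meantime

The more local commits you are holding, the longer each pass through the loop takes, and the better the odds that someone beats you to the repository again.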

When Git-SVN Works

Here are a handful of scenarios where git-svn comes in handy and sidesteps these problems:

100m People in VR is not the Goal

Wed 07 January 2015 by Michael Labbe
tags biz 

One hundred million people in VR at the same time isn’t a goal — it is the starting point.

The real goal is the inherent user lock-in that comes from hosting the most accurate simulation of materials, humans, goods and environments available. As the simulation of aspects of the real world approaches perfection, things that we used to do in the real world will start to make sense to do in a VR context instead.

This upward trend will intersect another: real raw materials are increasingly strained, pushing the cost of producing goods beyond the reach of the middle class. For activities that can be simulated well in VR but require hard-to-access materials in the real world, VR becomes a reasonable substitute.

Early VR 2.0 pioneers have talked about the opportunity to create a game that causes a mini-revolution the way Doom did. That's small potatoes. We are talking about a platform that controls artificial scarcity and can make copies of goods for fractions of a penny. The winners in this game are the controllers of this exclusive simulation.

In this environment, Ferrari's most valuable assets are going to be its trademarks, worth many times over anything else it holds.

Dominoes will fall. VR headsets are the jumping-off point, but not the whole picture: haptic controls, motion sensors, binaural audio interfaces, the list goes on. As each improves, more activities will make sense to perform in a virtual environment.

Twenty years from now, looking back, referring to VR as a headset is going to seem trite. The world is changing from a place that "has Internet" to a place that is the Internet, and for the first time, you only have to extrapolate the fidelity of current simulation technologies to see it.

How I Still Love Computing

Mon 05 January 2015 by Michael Labbe
tags rant 

I enjoyed consumer computing in the early eighties. I started programming on a Commodore 64 at a single-digit age. Even with only 64KB of RAM, I was hooked. Too young to afford the raw resources necessary for creating physical things, I was able to sit at a computer and build worlds. By five I had written a racing game in BASIC. By six, I had made a game called Ninja which was my love letter to 80s ninjas, replete with cutscenes. (Yeah, I had played NES Ninja Gaiden by this point.)

For me, computers have always been about making things, and not just games. Using them as a base for invention, learning and experimentation is incredibly enriching and rewarding. This, my personal approach to computing, has become increasingly dissonant with the direction of OS X and Windows. Jumping between the two as I shipped a commercial project, I felt increasingly held back by their focus on closed-ecosystem communication and link sharing rather than creative and technical productivity.

When you run third party software to replace every meaningful piece of user-facing functionality that ships with the OS, it is time to take stock of how misaligned your creative endeavors are with the direction of your OS vendor. Stepping back, I realized I don’t even like where commercial operating systems are going anymore.

Linux, in 2015, is still difficult to set up correctly. EFI BIOSes, binary NVidia drivers, webcams that work in a degraded state, sound systems that are deficient in execution. If you love computing, these are all worth overcoming.

Once you have made the (oft-frustrating) investment of smoothing your way towards an actual, working Linux desktop, there may actually be a lot less friction to getting real work done. Gone are the mandatory workspace transition animations, the need to convert vcxproj files to get libraries to compile, and the printer driver toast popups. Even if there isn't a true sense of control (you can't or won't really influence most open source projects in reality), the people who do have that control haven't chosen default settings that represent a bias in opposition to your interests. I enjoy Linux Mint's defaults the most, by the way. Thanks to Dan Leslie for this recommendation.

It won’t be a small investment. Open source is the land of time consuming false starts. There are many software projects which claim to be finished but lack serious features. There are legions of bad alternatives, and it costs real time and money to investigate each one of them to figure out what is a complete, sane and functional choice.

A solid example of this is Linux C/C++ debugging. There are plenty of GUI wrappers for GDB out there, but most of them are horrible, and the ones that aren't have considerable trade-offs that may grate against your personal preferences. I had the fortune of attending Steam Dev Days last year, where Bruce Dawson introduced me to QTCreator. (Youtube, ppt) I have been using it for basic logical debugging on a 300kloc C++ codebase for a year and I haven't gone crazy, yet.

Intent is an important, loaded word. We need to correctly apply the term “user hostile” to software that does something that a user ostensibly should not want. Shoehorning a full screen tablet experience into an OS built on APIs that clearly have not been dogfooded at scale by the vendor is user hostile. Putting social tracking cookies via a share button on a webpage is user hostile. Forcing your OS to have no browser competition through corporate policy and application signing is user hostile.

It is important to know the difference between user hostile software and bad software. Bad software may become better — it needs more time, more users to provide feedback, a better process, more experienced developers. User hostile software will not get better. It directs resources based on values that ensure a future of friction for creative and productive people.

I have seen talented, experienced developers publicly give up on switching to Linux, lashing out against bad software that is damping their forward motion. It is frustrating to hit a snag, and it is a good idea to warn people against expensive blind alleys in open source. However, I wonder if these people, who often have similar creative origins to mine, would still have the tenacity to re-amass the experience necessary to contribute at the level they currently do. Building an alternative workspace that empowers your trade is hardly as difficult as becoming an experienced developer in the first place. Patience borne of the love of the craft must be plied to building tools we can all rely upon.

People who care about the future of computing will benefit in the long term by warning against the user hostile decisions companies make and, instead, making an effort to use software that aims to serve them, working through the frustrating blind alleys and false starts. Software simply doesn't get better without users. Usage is contribution.

On a positive note, I recently rediscovered the Raspberry Pi and Arduino communities and some of the great projects that have been attempted. A lot of the spirit of the early days of creative computing is couched in modern hardware and software hacking and the "maker" community. As an exercise, you can Google a cool project idea, append "Raspberry Pi", and usually find someone who has experimented with it. This is a tremendously fun rabbit hole.

Fun!
