Developer experiences from the trenches
Git-svn is the bridge between Git and SVN. It is more dangerous than descending a ladder into a pitch-black, bottomless pit. With the ladder, you would use the tactile response of your foot hitting thin air as a prompt to stop descending. With git-svn, you just sort of slip into the pit, and short of being Batman, you’re not getting back up and out.
There are plenty of pages on the Internet talking about how to use Git-svn, but not a lot of them explain when you should avoid it. Here are the major gotchas:
If you clone your git-svn repo, say to another machine, know that the clone will be hopelessly out of sync once you run git svn dcommit on the original. Running dcommit reorders all of the changes in the Git repo, rewriting hashes in the process. When you then push or pull changes from the clone, Git will not be able to match the hashes. This warning saves you from a late-stage manual re-commit of all your changes from the cloned machine.
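The hash rewriting is easy to demonstrate with plain Git. This is a minimal sketch, not the real dcommit: git commit --amend stands in for the metadata rewrite that git svn dcommit performs on each commit message, and the identity settings are hypothetical.

```shell
# Sketch: any history rewrite produces new hashes, which is why a clone taken
# before `git svn dcommit` can no longer match the rewritten repository.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q repo
cd repo
git config user.email dev@example.com   # hypothetical identity for the demo
git config user.name "Dev"
echo hello > file.txt
git add file.txt
git commit -qm "initial"
before=$(git rev-parse HEAD)
# dcommit appends svn metadata to each commit message and rewrites the commit;
# --amend stands in for that rewrite here.
git commit --amend -qm "initial (git-svn-id: ...)"
after=$(git rev-parse HEAD)
if [ "$before" != "$after" ]; then
  echo "same change, new hash: any earlier clone is now out of sync"
fi
```

The same content now lives under a different hash, so a clone holding the old hash has no common history to merge against.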
Rebasing is systematic cherry picking. All of your pending changes are reapplied to the head of the git repo. In real world scenarios, this creates conflicts which must be resolved by manually merging.
Any time there is a manual merge, the integrity of the codebase rests on the accuracy of your merge. People make mistakes — conflict resolution can introduce bugs and force the team to re-test the build.
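The conflict scenario is easy to reproduce with plain Git; a hypothetical app.c is edited on both sides, and git svn rebase fails the same way when an upstream SVN commit touches a line in your pending change. The branch names and identity below are assumptions for the demo.

```shell
# Sketch: a rebase conflict that must be merged by hand, mirroring what
# `git svn rebase` does when upstream commits touch your pending lines.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q repo
cd repo
trunk=$(git symbolic-ref --short HEAD)  # works for either master or main
git config user.email dev@example.com
git config user.name "Dev"
echo "v1" > app.c
git add app.c
git commit -qm "base"
git checkout -qb feature
echo "my pending change" > app.c
git commit -qam "local work"
git checkout -q "$trunk"
echo "someone else's commit" > app.c
git commit -qam "upstream work"
git checkout -q feature
# Reapplying the pending change on top of the moved trunk conflicts:
if ! git rebase "$trunk" >/dev/null 2>&1; then
  echo "conflict: resolve by hand; correctness now depends on your merge"
  git rebase --abort
fi
```

Every such conflict is a point where a human decides what the merged code should be, which is exactly where the bugs described above creep in.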
This might seem obvious, but consider it in the context of the previous admonishment. If developers are committing to SVN while you perform time-consuming rebases, you are racing to finish a rebase so you can commit before you are out of date again.
Getting into a rebase, dcommit, fail, rebase loop is a real risk. Don’t hold on to too many changes, as each additional rebase calls on you to manually re-merge.
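That loop can be sketched as a retry cycle. Here attempt_dcommit is a hypothetical stand-in for git svn dcommit that is rejected twice, as if other developers committed to SVN while you were rebasing:

```shell
# Sketch of the rebase, dcommit, fail, rebase loop. `attempt_dcommit` is a
# stand-in that simulates being beaten to the commit twice before succeeding.
tries=0
attempt_dcommit() {
  tries=$((tries + 1))
  [ "$tries" -ge 3 ]   # rejected on attempts 1 and 2, succeeds on 3
}
until attempt_dcommit; do
  echo "dcommit rejected: repo out of date, rebasing again..."
  # In real life: git svn rebase   (and re-resolve any conflicts by hand)
done
echo "dcommit landed after $tries attempts"
```

Each trip around the loop is another manual conflict resolution, which is why holding a large batch of pending changes makes the race progressively worse.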
Here are a handful of scenarios where git-svn comes in handy and sidesteps these problems:
dcommit back to SVN.
One hundred million people in VR at the same time isn’t a goal — it is the starting point.
The real goal is obtaining the inherent user lock-in related to hosting the most accurate simulation of materials, humans, goods and environments available. As we approach perfecting simulation of aspects of the real world, things that we used to do in the real world will now make sense to do in a VR context instead.
This upward trend will intersect another: real raw materials are increasingly strained, pushing the cost of producing goods beyond the reach of the middle class. For activities that can be well simulated in VR but require hard-to-access materials in the real world, VR becomes a reasonable substitute.
Early VR 2.0 pioneers have talked about the opportunity to create a game that caused a mini-revolution like Doom did. That’s small potatoes. We are talking about a platform with control of artificial scarcity with the ability to make copies of goods for fractions of a penny. The winners in this game are the controllers of this exclusive simulation.
In this environment, Ferrari’s most valuable assets are going to be its trademarks, worth many times over anything else it holds.
Dominoes will fall. VR headsets are the jumping-off point, but not the whole picture — haptic controls, motion sensors, binaural audio interfaces, the list goes on. As each improves, more activities will make sense to perform in a virtual environment.
Looking back twenty years, referring to VR as a headset is going to seem trite. The world is changing from a place that “has Internet” to a place that is the Internet, and for the first time, you only have to extrapolate the fidelity of current simulation technologies to see it.
I enjoyed consumer computing in the early eighties. I started programming on a Commodore 64 at a single digit age. Even with only 64KB of RAM, I was hooked. Too young to afford the raw resources necessary for creating physical things, I was able to sit at a computer and build worlds. By five I had written a racing game in BASIC. By six, I had made a game called Ninja which was my love letter to 80s ninjas, replete with cutscenes. (Yeah, I had played NES Ninja Gaiden by this point).
For me, computers have always been about making things, and not just games. Using them as a base for invention, learning and experimentation is incredibly enriching and rewarding. This, my personal approach to computing, has become increasingly dissonant with the direction of OS X and Windows. Jumping between the two as I shipped a commercial project, I felt increasingly held back by their focus on closed-ecosystem communication and link sharing rather than creative and technical productivity.
When you run third party software to replace every meaningful piece of user-facing functionality that ships with the OS, it is time to take stock of how misaligned your creative endeavors are with the direction of your OS vendor. Stepping back, I realized I don’t even like where commercial operating systems are going anymore.
Linux, in 2015, is still difficult to set up correctly. EFI BIOSes, binary NVidia drivers, webcams that work in a degraded state, sound systems that are deficient in execution. If you love computing, these are all worth overcoming.
Once you have made the (oft-frustrating) investment of smoothing your way towards an actual, working Linux desktop, there may actually be a lot less friction to getting real work done. Gone are the mandatory workspace transition animations, the need to convert vcxproj files to get libraries to compile, and the printer driver toast popups. Even if there isn’t a true sense of control (you can’t or won’t really influence most open source projects in reality), the people who do have that control haven’t chosen default settings that represent a bias in opposition to your interests. I enjoy Linux Mint’s defaults the most, by the way. Thanks to Dan Leslie for this recommendation.
It won’t be a small investment. Open source is the land of time consuming false starts. There are many software projects which claim to be finished but lack serious features. There are legions of bad alternatives, and it costs real time and money to investigate each one of them to figure out what is a complete, sane and functional choice.
A solid example of this is Linux C/C++ debugging. There are plenty of GUI wrappers for GDB out there, but most of them are horrible, and the ones that aren’t have considerable trade-offs that may grate against your personal preferences. I had the fortune of attending Steam Dev Days last year, where Bruce Dawson introduced me to Qt Creator. (YouTube, ppt) I have been using it for basic logical debugging on a 300kloc C++ codebase for a year and I haven’t gone crazy, yet.
Intent is an important, loaded word. We need to correctly apply the term “user hostile” to software that does something that a user ostensibly should not want. Shoehorning a full screen tablet experience into an OS built on APIs that clearly have not been dogfooded at scale by the vendor is user hostile. Putting social tracking cookies via a share button on a webpage is user hostile. Forcing your OS to have no browser competition through corporate policy and application signing is user hostile.
It is important to know the difference between user hostile software and bad software. Bad software may become better — it needs more time, more users to provide feedback, a better process, more experienced developers. User hostile software will not get better. It directs resources based on values that ensure a future of friction for creative and productive people.
I have seen talented, experienced developers publicly give up on switching to Linux, lashing out against bad software that is damping their forward motion. It is frustrating to hit a snag, and it is a good idea to warn people against expensive blind alleys in open source. However, I wonder whether these people, who often have similar creative origins to mine, would still have the tenacity to re-amass the experience necessary to contribute at the level they currently do. Building an alternative workspace that empowers your trade is hardly as difficult as becoming an experienced developer in the first place. Patience borne of the love of the craft must be applied to building tools we can all rely upon.
People who care about the future of computing will benefit in the long term by warning against the user hostile decisions companies make and, instead, making an effort to use software that aims to serve them, working through the frustrating blind alleys and false starts. Software simply doesn’t get better without users. Usage is contribution.
On a positive note, I recently rediscovered the Raspberry Pi and Arduino communities and some of the great projects that have been attempted. A lot of the spirit of the early days of creative computing lives on in modern hardware and software hacking and the “maker” community. As an exercise, Google a cool project idea, append “Raspberry Pi”, and you will usually find someone who has experimented with it. This is a tremendously fun rabbit hole.
I have been on the Internet for 19 years, and on BBSes for four more before that. This is the first year where I have found myself reflecting on trends and deciding that the Internet is poised to lead its users towards permanent entrenchment in negative experiences and outcomes. The Internet has jumped the shark. It can un-jump, but it won’t in 2015. That’s my prediction, and here’s what you can do about it for yourself.
Let’s assume, for starters, that you are using the Internet to enrich your life in some way: you are here because you want to learn something, meet someone, or make something and share or sell it to people. Sure, there are always cat pictures and all of that cheap entertainment, but that’s not what I’m talking about. I’m assuming you are interested in getting something valuable out of your time using the Internet.
The most disturbing trend we’ve seen on the Internet in 2014 is an uptick in the severity and frequency of harassment of minorities. Gamergate and its response are both bolstered by Twitter, a web service that acts as a leaderboard for mobs: retweets, follower counts, replies and favourites are tracked. Ideas that need more than 140 characters to express are scattered. This damages discourse. You could not ask for an architecture that enhances mob mentality over rational discourse any more than what Twitter provides.
Public places will never be safe places on the Internet. While that is no justification for harassment, complete safety cannot be delivered as a technical or customer service solution. If anyone can join and re-join, it is not possible to achieve complete safety. This is history repeating itself, as anyone who spent time on IRC or on bulletin boards will attest. Unfortunately, this time the business models of the companies hosting these services depend on growth, which can only be achieved through openness. It would appear from the actions of social companies that they believe closed communities cannot scale into VC-funded businesses or publicly traded companies.
Next up, we have the problem of consumer overdependence on “the cloud”. Because so many companies primarily offer cloud-based solutions for your data, keeping everything in the cloud has come to feel normal to many people.
The consumer cloud is, in many cases, a tax on complacency — in some cases, quite literally a fee and in others, a sacrifice of privacy and control over your personal data. It costs more than the price of a 2TB hard drive — which could store many collections ten times over — for the permission to stream your music from Spotify for a single year. Once you’ve paid Adobe $120 a year to store your photos, that’s it — they have you on a plan because they have your photos. All of these services can be replaced by a couple of hard drives and a Synology product. Then you have complete privacy, and you are not at the whim of a huge company whose interests are at cross purposes with yours, at best.
If you told a photographer in 1990 that many of her peers would consider paying a company a yearly fee to hold her own photos in the future, she would look at you very oddly. How dystopic.
The last problem I want to address isn’t particularly new to 2014, but it isn’t getting any better, either. We get our information from social feeds now. Google famously ended Google Reader, asking us to get our news from Google+ instead. Sadly, people don’t use Facebook and Google+ to pass around truly useful information. Nine times out of ten, you get articles that amount to fodder for small talk and dinner conversations. This seriously degrades the utility of what you read on a daily basis.
I target these three problems because they are examples of companies routing us towards a less informative, more troubled Internet. They do it because they are operating at a cross purpose to our own: to scale their customer bases. I expect these assaults on meaningful Internet time will continue and increase in 2015.
You can take the Internet back for yourself. I recommend these three things:
Create closed communities; defend the ones you have already. If you run a company, take a stand. Create safe places for like-minded people and keep them that way. A chat window (Skype, Tox) with ten productive, smart people who contribute ideas and talk to each other is more rewarding than a stream of social updates running at cross purposes. Curate new friends from your Internet acquaintances.
Host your own data and services. If you have tech know-how, it’s satisfying and fun to take this back. It’s about as hard as setting up a router (read: easy) to buy and use a Synology product and Cloud Station. 10TB of Dropbox-like syncing with complete privacy and no monthly costs, anyone? Run a Subsonic server to stream music to your phone and stereo. Host your own photos for fractions of a penny per shot.
Fix your information diet. Start by blocking, hiding and un-following. Disable push notifications for almost all of your apps. If you use Gmail, use filters to suppress all unsolicited emails that make it in your inbox. Now that you’ve cleaned up, it’s time to get back to RSS. Use Feedly or similar. Build your list of blogs back up. Start blogging, yourself. Contribute useful information back.
It’s unrealistic to assume that these companies will change their ways. But you can make your time on the Internet that much more enriching and valuable. The great news is that we still have the computing freedom to sidestep these trends.