Portable Remote Desktop with GearVR

I love coding and traveling. Naturally, I wanted to explore how to integrate code deeper into my travel experience, whether in the city or around the world.

Thanks to the recent advancements on the GearVR front, this is now possible with a bit of nerdery.

Goals

Assembly Goods

Hardware:

Software:

Bags:

Tunes (for inspirado):

Laptop Setup

  1. Use the USB Wifi adapter to host a Wifi Hotspot with Virtual DNS

    We want to host a dedicated Wifi connection to establish a channel of communication between the GearVR and laptop.

    This will ensure low, consistent latency when streaming the desktop environment into the GearVR over the VNC protocol.

    Using the create_ap tool, create a HotSpot that shares the internet connection across cascading Wifi interfaces:

    
        sudo create_ap wlp0s20u1 wlp2s0b1 "vr_wifi_hotspot" "my_password" &

    This will allow the laptop to maintain a dedicated native connection to standard Wifi networks and the connection will be shared with our new subnetwork.

    Additionally, it allows the subnetwork to work in isolation, so we can still jam offline (see goals)

    Run ifconfig to get the IP associated with your laptop on the new software subnetwork:

    
        wlp0s20u1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
               inet 192.168.12.1  netmask 255.255.255.0  broadcast 192.168.12.255
               inet6 fe80::76da:38ff:fe59:d034  prefixlen 64  scopeid 0x20<link>
               ether 74:da:38:59:d0:34  txqueuelen 1000  (Ethernet)
               RX packets 0  bytes 0 (0.0 B)
               RX errors 0  dropped 0  overruns 0  frame 0
               TX packets 33  bytes 6962 (6.7 KiB)
               TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

        wlp2s0b1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
               inet 10.236.188.250  netmask 255.224.0.0  broadcast 10.255.255.255
               inet6 fe80::9694:26ff:fe04:de6e  prefixlen 64  scopeid 0x20<link>
               ether 94:94:26:04:de:6e  txqueuelen 1000  (Ethernet)
               RX packets 57433  bytes 55012835 (52.4 MiB)
               RX errors 0  dropped 0  overruns 0  frame 0
               TX packets 52602  bytes 9910754 (9.4 MiB)
               TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
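    If you'd rather script this lookup than eyeball the ifconfig dump, a small helper can pull out just the IPv4 address. This is a sketch assuming standard ifconfig output formatting; the `hotspot_ip` name is my own.

```shell
# hotspot_ip IFACE - print the first IPv4 address that ifconfig reports for
# the given interface (e.g. the hotspot interface wlp0s20u1 above).
hotspot_ip() {
  ifconfig "$1" | awk '/inet / {print $2; exit}'
}
# Usage: hotspot_ip wlp0s20u1
```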
  2. Enable VNC streaming from laptop display

    Broadcast VNC on the primary X display for consumption by the GearVR on the subnet.

    
        x0vncserver -display :0 -passwordfile ~/.vnc/passwd

    Providing a VNC password is recommended, though arguably unnecessary if you protect the AP HotSpot well.

  3. Pair Bluetooth Devices

    You will certainly want to pair a bluetooth keyboard, since VR keyboard layouts stink.

    The bluetoothctl utility works well for this on Arch; use whatever makes sense for your OS.

    Optionally pair a bluetooth mouse, but I encourage you to use the directed-focus pointer provided by the VNC Remote Desktop Client, which works well with the GearVR's headset touch sensor.
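    The interactive pairing steps can also be scripted by piping commands into bluetoothctl. A sketch, assuming BlueZ's bluetoothctl; replace the placeholder MAC with your keyboard's address from a `scan on` session.

```shell
# pair_bt_device MAC - power on the adapter, then pair, trust and connect
# the device in one shot via bluetoothctl.
pair_bt_device() {
  bluetoothctl <<EOF
power on
pair $1
trust $1
connect $1
EOF
}
# Usage: pair_bt_device AA:BB:CC:DD:EE:FF (placeholder address)
```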

  4. Disable Laptop Lid Shut Suspend

    Once our desktop is broadcasting over the local HotSpot, we want to tuck the laptop out of sight and keep it rolling.

    This can be achieved by altering the logind settings in Arch; otherwise, check your OS's power management settings.

    Additionally, I always set an idle time to avoid endless laptop uptime after ceasing GearVR engagement.

    
        # /etc/systemd/logind.conf
        HandleLidSwitch=ignore
        IdleAction=suspend
        IdleActionSec=5min
  5. Hide laptop in thin nerd-bag

    Wifi range means you can drop your laptop in a bag and tuck it under a seat.

    I have even had the laptop in the trunk of an Uber and still enjoyed wonderfully low VNC latency.

    Warning: increased latency if using VR in/near nuclear reactors

    The recommended fashionable approach is to purchase a Built NY Laptop Vest and put on a nice J.Crew flannel (no one expects lumberjacks to have tech).


Phone Setup

  1. Disable the GearVR Application Service

    GearVR has a default service running that activates/suspends the device based on the proximity sensor in the visor.

    Install Samsung Package Disabler Pro and disable service Gear VR Service (com.samsung.android.hmt.vrsvc)

    GearVR Service Disable

    This will allow you to run stereoscopic apps (Cardboard included) without the GearVR OS kicking in.

  2. Connect to the Virtual HotSpot (AP) hosted from laptop

    Use the standard Wifi interface on the Galaxy to connect to your laptop hosted Virtual HotSpot.

    This will assign a software-managed subnet address in the same range as your laptop's address on the internal network.

    From this connection, we have a dedicated, safe channel with consistent throughput between the GearVR and laptop.

    Virtual HotSpot

  3. Setup the VNC client

    Install the VNC Remote Desktop Client and configure the following in settings:

    • Orientation Provider=Google Cardboard
    • Inter-lens distance=60mm
    • Screen to lens distance=40mm
    • Vertical distance to lens center=30mm

    VNC Setup

  4. Enable video pass-through

    I recommend enabling Camera preview so you can feed your ego by watching the crowd respond to your VR headset's gleam.

    • Camera Preview=On
    • Select size of the Camera preview=100%

    VNC Setup

  5. Opt for VR managed cursor

    If you decided to pair your own mouse, skip this.

    Otherwise, VNC Viewer adheres to the common UX of directed-focus selection. That is, you look at something you want to engage with and use the tap sensor to select.

    • Enable local cursor=On

    The caveats here are the lack of drag-selection and right-click; both are possible with keyboard configurations in X.
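    For the right-click gap, one option on the X side is to bind a key to a synthesized mouse event with xdotool (assuming xdotool is installed; wire the helper to a key via your window manager or xbindkeys):

```shell
# right_click - synthesize a mouse button-3 click at the current pointer
# position using xdotool.
right_click() {
  xdotool click 3
}
```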

  6. Connect to Laptop VNC Server

    Add the laptop VNC server in the VNC Viewer app using the laptop's virtual HotSpot subnet address. Remember, we don't want this traffic to cross external routers, since we can't assure consistent latency.

    • Viewer Mode=Off

    You can optionally provide the VNC password during this setup.

    VNC Setup
    VNC Setup

VICTORY

Screenshot on GearVR

This effectively creates a low-latency, 100% portable Virtual Desktop that can attach to public Wifi networks as you travel.

Using a keyboard-driven window manager (i3), you can easily earn elite hipster status while slamming heavy work tasks (recompiling Clang for LLVM).

IMO, this is even better than a tethered Rift DK2 experience because it is wireless.

Confessions of a Software Architect

This isn’t a software tutorial, or a starter for the enthusiast developer.

This is a confession of a sort of ‘love-sickness’ that comes over a man at the hand of a great craft. A craft hyper-evolving, as mysterious as the ‘self’ within, that teases a developer into pseudo-immortality beneath its algorithmic harmonies.

This story is told through my experiences in time, where all representations, all beautiful outlifts of the mind, take root in the flesh and dirt. A message of people, progress and synthesis between seemingly unreachable ends.

Consider it, with this insight, merely a serious contemplation of software as high art from the voice of the artist, which is meant for curious individuals to digest, to carry forward as an emergent manifesto.

Be warned, however, that software is a discipline along the spiral’s-arc, ever widening. Traversing its body breeds fundamental uncertainty in the nature of the self in time amongst people and challenges progress. The human impact is significant, in some cases destroying.

Enter at your own risk.

Envelope Egotism

At the heart of a great software developer is the egotism of genius, which is a new, powerful force in nature. We are the modern poets, hurled upon a new medium that accumulates 60,000 years of progressive expression, revealing a self beyond man.

This process first begins by fighting a programming language as a human being, which treats us as youth regardless of age. The path is scary, inexplicably difficult and devouring.

The machine is unlike any person we have encountered: its memory pliant as clay, its reasoning perfect and execution exact. A developer learns this early and a relationship is developed through the intimate, reflective focus the machine has with one’s mind.

It feels as if an individual looks directly into your eyes, day after day, functions for you while away and ‘lives’ purely for your intellect. Just as with a child, an individual must choose to master, or develop this interaction as a peer. They must ethically decide on the application of this reflection, on the extent of its detail, just as with a piece of paper or canvas. Though, unlike other forms of expression aside gestation within a womb, what we create may perform, be set out on the sea of time and execute beyond our hold.

Immediately, like all things living, code will not perform as expected. This is not a flaw in life, but in our descriptions of the expected.

The developer will generally begin by explaining this unexpected behavior as ‘bugs’ or ‘magic’. For, it is a magic, just like the arrival of outcomes to ourselves day-to-day. The execution floods forward, faster than the mind can comprehend and causality spontaneously bursts into a conclusion, be it on course or not. We struggle to reconstruct, using models of representations, to derive a clear causal chain.

Truths are first learned from being in nature, thus the developer must sink into the machine. So the code is tweaked, altered experimentally and the outcome measured. This is repeated, hour after hour, until the developer's mind is able to grasp the various possibilities of their actions in a pseudo landscape, by which the physical rules govern the motions of their idea's body. Time and Place must be suspended, adjusted like a toy, revealing co-existing possibilities. The developer quickly learns that the rules are moving, accelerating with context and that their code is relativistic with this new power.

Each object, each statement, is both mathematics and form. It is both definition and ambiguity, relative to a usage or deterministic outcome. In this way, all things become tools, no matter how insignificant, leaving to the crafter the task of stitching together a reality end-to-end. Continuity is negotiable, representation fluid, as if mind and body were merged in this new world.

They quickly lose themselves perfecting algorithms, rephrasing logic and perhaps confused as to how to control the outcomes in this landscape. Digging the ditches of things that touch both edges of infinity, trying to find some footing. This is merely the mind recoiling in horror at the cusp of something great, desperate to form an identity, and is a sign of good things.

Many solitary hours are burnt away into dust this way and many of us lose touch with loved ones and the world around us. It pains me even now to recall how lonesome this period was, while in hindsight, it is clear that divine crafting requires a focus beyond cultures.

One day, after perhaps thousands of hours of coding, confused by magic and on the verge of abandon, studying the causal chains of a language, we reach a revelation: The program’s flaws are our own.

It was my mind that miscommunicated. My error in a character, a condition or in a flow. I misjudged a library, misstepped at some point, and the flaw is awaiting discovery with simple application of time and attention. It was my representation of the world, communicated outwardly, that failed me, since the machine humbly reflects.

This is no different than my memory of the boom of ocean waves, which fails to recall itself to me with perfection. I return to the sea, year after year, to re-experience it firsthand. Or the vision of countless leaves on a tree, which seems magical when observed, but clearly sits perfectly, answering for itself.

This revelation, which I remember in my own life, is the divergent path between the genius’ egotism and mere talent.

Before the software developer lies a new, fresh medium that is abyssal, seemingly endless, that accepts a mind's voice, regardless of creed, race or orientation. We bellow out commandments, statements of nature into the darkness, and unlike any other medium besides life itself, a voice echoes back in living activity.

Yet, unlike a person, this medium is focused to an individual’s spiral-center, ‘where the curve is made and meant’. It reveals flaws, demands perfect honesty, and enforces total awareness of representation. The individual becomes marred dancing with it, as if a mason of thought, the sharp stone cuts deeply by exposing fundamental flaws in self-understanding.

To accept this, to move forward as a software developer, is to accept an open wound, by which all things taught in culture, all things learned through inheritance, become dissolved, forcing one to reconstruct civilization line by line, word by word, thought by thought. We again are at the foot of nature, of our place in time, with a living canvas, which feels like a life-within-life.

In this way, the software developer may choose to realize the architect within themselves, and must set out towards the greatest task of all: to bear self honestly. Through this, I suspect, all of nature will re-emerge from the image within.

To stomach this task and bear it is to precede talents, to out-weigh rank and accelerate a person’s mind to the ends of time by necessity of the art.

How else must we represent our nature without holding its endlessness within? Its impossibility, its contradictions and beatitudes.

One must become 'nature's hero', as Emerson wrote, and will play out their epic, for the first time in human history, completely in the life of another. They must become many selves, in order to represent self. They must champion both forms and the formless, to re-create in contradictions synthesis. They must superpose across all times, to create and mend this new life.

Thus, the software developer may choose to embrace this great egotism, rooted in the opportunity of our delicate time in history, with a living medium to reflect back a new statement of being.

We must accept this egotism of genius and bear it as a sickness in love.

Build for Builders

All hands involved with software are builders.

The stakeholder thinks up a schema of user-stories to automate a currently redundant labor workflow. The coder dreams up a solution in objects and time, but also builds in their own expressions in the code. The consumer, confronted with a life, must weave together application-interactions that make statement to their place, their time in the world. A researcher will explore the varying outcomes of a cosmological simulation. A data-entry worker will help fund organized analytics, through hours of manual spreadsheet entries, to help inform insightful futures. This extends to all regions of software, market driven or not.

In all cases, each interactor simply has a goal, be it eternal or temporal, that is more a matter of psychology than absolute. And these consumers, fundamentally, are interested in a way of making a statement of novelty about their situations. They genuinely have a desire to engage with living, which is the highest role of a tool’s purpose.

This is evident in culture historically. Before an invention, the expectations and goals of individuals are scoped to the possible; in some cases, the impossible before their time. When a new invention, or new rules emerge, expectations adjust, sometimes expanding or becoming more restrictive, to complement this new possibility.

Operating systems, design patterns and program languages in general are merely a mechanism of synthesizing a landscape with possibilities and are subject to invention. Today, Operating Systems are a pseudo-stable place where users can go to play out the narrative of an activity. They offer a sort of ‘cycle of software continuity’, by which a user can build expectations, weave those expectations with their lives and find value in their interactions.

The magic is in the builder more than the tool.

A software architect must account for all of these ambitions, all of these motivations, while always focusing on the fundamental observation that builders are capable of living in any ecosystem. The next generation will breathe new life in new systems of mechanics, or will create emergent tools in pre-existing paradigms. Change one simple rule, the quantum state of a computation or the interaction pattern of the web, and suddenly the external builders themselves reinvent culture through it. This churn is ancient and software is governed by its cycle.

The software architect must absorb this unpredictability, this growth in mystery, by adopting ambiguity. Our language is that of generality and generics. We are called to build systems, knowing this future potential and current application, which meets both needs. This leads a software architect to desire an investment in loosely-coupled, generalized architectures. We do not build products but solutions for solutions, from which software is generated. This is the story of the philosopher, yet our extremes form to function.

A good architecture is a crystal where the light of a user’s purpose may shine in, be transformed and emerge concretely from another face. The implementation of this effort is often condensed, structural, sometimes ephemeral, and the ‘framework’ is a representation of a ‘culture’ of itself. Populations live through it, being each infused from-the-ground-up with its values. The user must reach their own conclusions with the possibilities by experimentation of rules and will naturally build their own composite solutions with our efforts. The user must work with others through our frameworks to manifold communication. In this way, each user becomes a pilgrim, grown from the culture’s fields, and will carry the influence out to new continents.

But this is, constantly, a bold ambition, both architecturally and economically. The architect's vision is subject to explanation, which curiously contradicts their insight. How better to explain the prediction than to implement it and watch it function? The software is the statement of explanation, and UML ends up being a childish means of expressing this. I would much rather embrace my lover in the flesh, feeling her living pulse, than accept a painting of her. Mementos of love's realities are only useful to those that are not caught in its current. Likewise, the desire of a software architect is a realization in the dirt and material of usage, and anything less is just inference. Users know this and rarely linger during conversations beyond practical application.

The product, when consumed, should be an object of inspiration and empower the user transparently. The user should feel the tools as an extension of their own self, to arrive at the ideal conclusion of the architect’s original calling to realization within the second-nature of the machine. Therefore, the highest goal of a software architect is to help other builders build. Through this, we aid in the eventual arrival of the user to the machine as a developer themselves, thus allowing them to behold the extended self.

The software architect is, in this sense, a preacher of an enlightenment. Or, more practically, a builder of builders. Our work should be nothing less than a holy ideal because our goals and craft are the greatest culmination in history. Our industry delivers a union of great contradictions in self thought unconquerable to our fathers, their fathers and beyond.

Substantialize Ideas

Your object representation is a suspension of an idea. The fidelity, by any degree of complexity, is the bridge between the lofty representation and a temporary belief in a simulated life.

Each developer, each user, lives briefly in the synthesized landscape. The screen is a space that information actually occupies, indifferent between the living world and a device. The program is opened, created in time; they push and pull with it like a body, achieve a goal or fulfill a desire, then control its disposal.

This is a game of forms. But, the extremes of nature do not play well with just forms alone. The quanta super-positions, collapses into both wave and object in aspects of interaction. The stellar outlifts arc time by illustrious gravities that make any tangible statement scoped to the observer's space, leaving us clueless about its lifetime. Likewise, object representation is infinitely relative to a goal.

Thus, a software architect must think in terms of both usage and definition, being a fundamental statement of nature itself. Forms are analogous to products, not architectures, which deal with waves and ambiguities.

In order to define an object's properties, its methods and events, we must know its journey of evolution. We must imagine the endless circumstances in which the object may be used, adapted and altered. This is an act of playing out the physics of an architecture on the 'body' of an object representation, seeing how it becomes bruised and convoluted given circumstances of usage. Through this, we live many lives over as the object in ourselves.

Within our early years of developing we claim that an object is ‘inadequate’ or ‘incorrect’. This is only true in aspects, given an actualized goal, while the object itself lives in the software architect as a wave, awaiting the change of goals to collapse into being. The object representation itself is perfect and beautiful in many circumstances, and is brought meaning through the use-cases that emerge.

Therefore, we must create objects that complement people's needs and desires, and that live many situations out before us as we build. When unpredicted desires arrive, allow the forms to evolve into new meanings, always collapsing into the forms that complement the aspects of usage. Generally, this is the story of polymorphism and iteration, but is romantic in its nature.

A software architect must remember that life itself is the shaping stone that substantializes ideas into forms. For every idea, there are composite forms, and the great architect's mind is the superposition, the lexicon of sorts, of how the ambiguous ideas can materialize into properties, methods and events.

We must become leaders of definition and bearers of the counter-argument to an object's premature actualization. So much so that, often, from the outside observer's view, a software leader will appear to procrastinate an implementation until the last possible moment before a deadline. This is by design, in that the final statement of need is the last argument which shapes the definitions within. Or, more plainly stated, they are forcing you to 'need', which reveals the most elegant solution.

…This is an evolving document, and will change from time to time…

How to disable the OSX dashboard

Ever been hard at work in OSX when suddenly a wayward key-command or gesture takes you to the useless Dashboard view?

Disable the Dashboard from ever launching again:

    defaults write com.apple.dashboard mcx-disabled -boolean YES

If you want the change to take effect immediately, without a restart, you will need to restart the Dock:

    killall Dock

Re-enabling the Dashboard feature:

    defaults write com.apple.dashboard mcx-disabled -boolean NO

Optimize your WinJS with MetroClosure

Want a boost on your WinJS application performance for free?

Download my NuGet package MetroClosure to integrate the power of the Google Closure Compiler into your WinStore packages.

Supports both standard WinStore Projects and Universal Apps for WinStore/WinPhone

How to Setup

  1. Install Java on your build/development machine
  2. Add the Java executable to the PATH variable
  3. Install the MetroClosure NuGet package to your project
  4. Build in the RELEASE configuration
A Sample Optimization

Let’s say we build an application that does a long-running computation on launch:

var onStart = function () {
    // Dispatch
    setTimeout(function () {
        var outputContainer = window.document.getElementById('Main');

        // Start timer
        var start = new Date();

        // Do some work that is not optimized
        var dict = {};
        for (var i = 0; i <= 5000; i++)
            for (var m = 0; m <= 300; m++) {
                dict[i.toString() + "-" + m.toString()] = i + m;
            }
        var total = 0;
        for (var key in dict) {
            total += dict[key];
        }

        // Stop timer
        var end = new Date();

        // Output results to screen
        outputContainer.innerText = end.getTime() - start.getTime() + "ms";
    }, 0);
}

JavaScript gurus (like myself) can see immediate problems with this code. For starters, the for loops are a source of major cost, and not all loop styles perform the same across browsers.

A few optimal examples from Chrome:

And a poor performing one:

Running the above example in DEBUG, which doesn't use the Closure Compiler, clocks in at around 5000ms in the Win8.1 runtime.

But look at what the code is refactored to when building in RELEASE, which clocks in at around 3500ms:

for(var b=window.document.getElementById("Main"),e=new Date,a={},c=0;5E3>=c;c++)
    for(var d=0;300>=d;d++)
        a[c.toString()+"-"+d.toString()]=c+d;
for(var f in a);
b.innerText=(new Date).getTime()-e.getTime()+"ms"

The Google Closure Compiler actually compiles the JS, optimizes it, and then re-serializes it out as more performant code.

You get JS optimizations for free with this tool, so generally you don't have to do anything special at development time.
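To experiment outside of MSBuild, the same compilation can be run by hand. This is a sketch with the flags the package passes to the compiler; `compiler.jar` is a placeholder for wherever your Closure Compiler jar lives.

```shell
# compile_js IN OUT - run the Google Closure Compiler the way the
# MetroClosure target does (ES5 in, optimized output out).
compile_js() {
  java -jar compiler.jar --language_in=ECMASCRIPT5 \
    --js "$1" --js_output_file "$2"
}
# Usage: compile_js js/default.js js/default.js.cache
```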

Usage Notes:

How does it work?

Well, for starters, the MetroClosure package hooks into your principal Project File via an MSBuild Import statement.

<Import Project="packages\MetroClosure.0.1\build\win\metroclosure.targets" Condition="Exists('packages\MetroClosure.0.1\build\win\metroclosure.targets')" />
<Target Name="EnsureNuGetPackageBuildImports" BeforeTargets="PrepareForBuild">
  <PropertyGroup>
    <ErrorText>This project references NuGet package(s) that are missing on this computer. Enable NuGet Package Restore to download them.  For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}.</ErrorText>
  </PropertyGroup>
  <Error Condition="!Exists('packages\MetroClosure.0.1\build\win\metroclosure.targets')" Text="$([System.String]::Format('$(ErrorText)', 'packages\MetroClosure.0.1\build\win\metroclosure.targets'))" />
</Target>

When it comes time to produce an AppX, either in Visual Studio or on the Command Line, the package produces “*.cache” files alongside your scripts that contain the compiled sources.

Now, the real magic is hooking into the WinStore MSBuild default Targets. When the AppX strategy fires up, we hook into the package descriptors, remove existing JS file references and replace those with our Cache version mapping to the same target output.

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <DisableFastUpToDateCheck>true</DisableFastUpToDateCheck>
  </PropertyGroup>
  <Target Name="CleanCacheRecords">
    <ItemGroup>
      <CacheFiles Include="$(ProjectDir)\**\*.cache" />
    </ItemGroup>
    <Delete Files="@(CacheFiles)" />
  </Target>

  <Target Name="CompileCacheRecords" DependsOnTargets="CleanCacheRecords">
    <ItemGroup>
      <JsFiles Include="$(ProjectDir)\**\*.js" Exclude="$(ProjectDir)\bin\**\*.*" />
      <ClosureCompiler Include="$(ProjectDir)Packages$(PackagesDirectory)\MetroClosure.*\lib\win\GoogleClosure.*\*.jar" />
    </ItemGroup>
    <Exec Condition="'$(Configuration)' == 'Release' " Command="java -jar @(ClosureCompiler) --language_in=ECMASCRIPT5 --js %(JsFiles.Identity) --js_output_file %(JsFiles.RootDir)%(JsFiles.Directory)%(JsFiles.Filename).js.cache" ContinueOnError="WarnAndContinue" />
  </Target>

  <Target Name="_PackageExtraFiles" DependsOnTargets="CompileCacheRecords">
    <Message Text="PreCompilation-Files: %(PackagingOutputs.Identity)  TargetPath:%(PackagingOutputs.TargetPath)" Importance="high" />
    <ItemGroup>
      <_AddToPackageFiles Include="$(ProjectDir)\**\*.cache" Exclude="$(ProjectDir)\bin\**\*.*" />
      <PackagingOutputs Remove="@(PackagingOutputs)" Condition="'%(Extension)'=='.cache'" />
      <PackagingOutputs Remove="@(PackagingOutputs)" Condition="'%(Extension)'=='.js' AND '$(Configuration)' == 'Release' AND Exists('%(RootDir)%(Directory)%(Filename).js.cache')" />
      <PackagingOutputs Include="@(_AddToPackageFiles -> '%(FullPath)')">
        <OutputGroup>Content</OutputGroup>
        <ProjectName>$(ProjectName)</ProjectName>
        <TargetPath>%(RecursiveDir)%(Filename)</TargetPath>
      </PackagingOutputs>
    </ItemGroup>
  </Target>

  <Target Name="PackageExtraFiles" DependsOnTargets="_PackageExtraFiles" AfterTargets="GetPackagingOutputs">
    <Message Text="PostCompilation-Files: %(PackagingOutputs.Identity)  TargetPath:%(PackagingOutputs.TargetPath)" Importance="high" />
  </Target>
</Project>

The observant will notice a required, yet severely undocumented, property left over from the C++ days:

<PropertyGroup>
  <DisableFastUpToDateCheck>true</DisableFastUpToDateCheck>
</PropertyGroup>

This is required in order to prevent Visual Studio from triggering 'Hot Deploys' while in DEBUG. These special deploys actually don't invoke your Project's Build File, making pre-processing impossible outside of the project workspace.

Looking at the RELEASE build log, you can see first the AppX manifest’s file relationships; these identify the source file (workspace) and how the resource will show up in the target package:

PreCompilation-Files: \WindowsApp-SlowCode\bin\Release\ReverseMap\resources.pri  TargetPath:resources.pri
PreCompilation-Files: \WindowsApp-SlowCode\default.html  TargetPath:default.html
PreCompilation-Files: \WindowsApp-SlowCode\images\logo.scale-100.png  TargetPath:images\logo.scale-100.png
PreCompilation-Files: \WindowsApp-SlowCode\images\smalllogo.scale-100.png  TargetPath:images\smalllogo.scale-100.png
PreCompilation-Files: \WindowsApp-SlowCode\images\splashscreen.scale-100.png  TargetPath:images\splashscreen.scale-100.png
PreCompilation-Files: \WindowsApp-SlowCode\images\storelogo.scale-100.png  TargetPath:images\storelogo.scale-100.png
PreCompilation-Files: \WindowsApp-SlowCode\js\default.js  TargetPath:js\default.js
PreCompilation-Files: \WindowsApp-SlowCode\css\default.css  TargetPath:css\default.css
PreCompilation-Files: \WindowsApp-SlowCode\packages.config  TargetPath:packages.config

The \WindowsApp-SlowCode\js\default.js maps to js\default.js, which we simply want to update, by altering the existing ItemGroups, to pull from our compiled \WindowsApp-SlowCode\js\default.js.cache.

After the MetroClosure MSBuild jazz goes through, here is what the package manifest looks like:

PostCompilation-Files: \WindowsApp-SlowCode\bin\Release\ReverseMap\resources.pri  TargetPath:resources.pri
PostCompilation-Files: \WindowsApp-SlowCode\default.html  TargetPath:default.html
PostCompilation-Files: \WindowsApp-SlowCode\images\logo.scale-100.png  TargetPath:images\logo.scale-100.png
PostCompilation-Files: \WindowsApp-SlowCode\images\smalllogo.scale-100.png  TargetPath:images\smalllogo.scale-100.png
PostCompilation-Files: \WindowsApp-SlowCode\images\splashscreen.scale-100.png  TargetPath:images\splashscreen.scale-100.png
PostCompilation-Files: \WindowsApp-SlowCode\images\storelogo.scale-100.png  TargetPath:images\storelogo.scale-100.png
PostCompilation-Files: \WindowsApp-SlowCode\css\default.css  TargetPath:css\default.css
PostCompilation-Files: \WindowsApp-SlowCode\packages.config  TargetPath:packages.config
PostCompilation-Files: \WindowsApp-SlowCode\js\default.js.cache  TargetPath:js\default.js

We are just updating the package link, which makes sure our optimized goodies get into the final Appx package.

Sources and Feedback

Github NuGet-MetroClosure Library

NuGet MetroClosure Package

File an Issue

The Modern Workplace

Stop using Apache or the built-in web server on OS X, seriously.

Install NodeJS:

curl http://npmjs.org/install.sh | sh

Install Http-Server:

npm install http-server -g

Create an Alias:

# web server (http-server node)
 	alias web='[Your Bin Directory]/http-server'
 	

Then, from any directory just call:

ttm-mba% web
 	Starting up http-server, serving ./ on port: 8080
 	Hit CTRL-C to stop the server
 	

You can now navigate to http://localhost:8080 and get to files hosted in that directory.

Building a Navigation Framework for iOS6

I recently set up a scalable Navigation Architecture for iOS (targeting 6.1) and wanted to share some of the elegance of an approach using the built-in Message Bus available to application developers.

Code for this tutorial: git@bitbucket.org:deepelement/ios-architectures.git

See the NavigationFramework Project

Picking Xibs over Storyboards

Apple introduced the Storyboard in iOS5. This was a big deal for Creative Developers and Designers, but a nightmare for large-scale teams and multi-view apps.

Some of the pitfalls that show up with Storyboards on large teams: the single storyboard file becomes a constant source of merge conflicts, its underlying XML is painful to review and hand-merge, and view-transition logic ends up smeared across both the storyboard and code.

What we want is a flatter, more manageable architecture that allows view-transition and work-flow complexities to be resolved in a way that balances the concerns of Source, Design and Creative-Engineering.

Flatness is Good

We want views organized in a way that will allow growth without increasing the complexity of the source. This aids in adopting team members mid-cycle.

We also want to hide the state-workflows (and all of those details) in the implementation, not the organization of the implementation.

So, we should create a Views top-level folder in our Project and guide the design to allow a ton of shallow view folders to exist with all details for those views encapsulated within topical folders.

Flat Views

In addition, we should create a clearly isolated root, called Framework, for the components that bridge our intended implementation into the traditional iOS visual frameworks.

Flat Views & Framework
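Concretely, the layout could look something like this. The view names are just examples; the per-view file names follow the `<Name>UIViewController` / `<Name>UIView_<device>` convention that the view resolver later in this post relies on:

```
Views/
    Splash/
        SplashUIViewController.h
        SplashUIViewController.m
        SplashUIView_iPhone.xib
        SplashUIView_iPad.xib
    Login/
        LoginUIViewController.h
        LoginUIViewController.m
        LoginUIView_iPhone.xib
        LoginUIView_iPad.xib
Framework/
    ViewResolver/
    ResponderNavigationController/
```

Each view topic stays shallow and self-contained, so adding a screen never deepens the tree.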

Creating Screens - aka View Topics

When creating a view, we want to allow both the Engineers and Creative Developers to flex their muscles without blockers.

The best way to achieve this is to support MVC.

  1. Create a Group under the Views conceptual group that represents a particular screen in the application
  2. Within the View topic folder, we need a series of items:

    • View Controller Header File (*.h)
    • View Controller Implementation (*.m)
    • Visual Templates (*.xib) - Visual templates that represent target resolutions and devices

    View Topics

  3. Next, link the Xib to the Controller interface using the Xcode Interface Builder

see How to link a .xib file to a class file with Xcode 4

Rinse and repeat using this strategy.

Building an Accessible Navigation API

Since iOS SDK 2.0, Apple has allowed developers to utilize a clever Message Bus feature called the NSNotificationCenter.

An NSNotificationCenter object (or simply, notification center) provides a mechanism for broadcasting information within a program. An NSNotificationCenter object is essentially a notification dispatch table.

The NotificationCenter is unique in that the SDK already provides a statically scoped shared instance and takes care of threading concerns, making the solution event-oriented and able to cross-cut the entire architecture. This makes it a perfect candidate for abstracting navigation requests and allowing views to remain unaware of other instances when a transition is needed.

To take advantage of the NSNotificationCenter, we first need to build an abstract class representing our views.

Building an abstract UIViewController

First, create a group under Framework and add a Header called FrameworkUIViewController.h that subclasses the UIViewController:

#import <UIKit/UIKit.h>
 	
 	@interface FrameworkUIViewController : UIViewController
 	
 	- (void)onDataset:(NSObject*)data;
 	@property(nonatomic, retain, readonly) NSObject *data;
 	
 	@end
 	

Next, create an implementation of FrameworkUIViewController.m in the same folder

#import "FrameworkUIViewController.h"
 	
 	@interface FrameworkUIViewController ()
 	@property(nonatomic, retain, readwrite) NSObject *data;
 	@end
 	
 	
 	@implementation FrameworkUIViewController
 	@synthesize data;
 	
 	- (void)onDataset:(NSObject*)data{
 	    self.data = data;
 	}
 	
 	@end
 	

Now, we can adapt each of the Views topic screen interfaces to extend the FrameworkUIViewController to fold into the navigation solution.

Creating the ViewTypes lookup type

As a method to allow navigation requests to correctly route to a particular view, we will need a typed way of defining exactly what views are in our architecture.

To do this, create a ViewTypes.m enum within the Views group:

typedef enum {
    SPLASH = 1,
    LANDING = 2,
    LOGIN = 3,
    EXIT_CONFIRMATION = 4
} viewTypes;

Creating a View Resolver

In order to associate a ViewType to a particular combination of Controllers and Xib templates, we need to create a resolver. We can also encapsulate conceptual strategies that allow intelligent template selection here too (iPad vs. iPhone resolution).

Create a new group under Framework called ViewResolver.
Here, we want to create an interface called ViewResolver.h that gives the navigation architecture a way of querying the strategy for the loaded controller:

#import "FrameworkUIViewController.h"
 	#import "ViewTypes.m"
 	
 	@interface ViewResolver : NSObject
 	
 	-(FrameworkUIViewController*) resolve:(NSString*)name:(viewTypes)type;
 	
 	@end
 	

Next, let’s create an implementation relative to our View directory in a file called ViewResolver.m:

#import "ViewResolver.h"
 	
 	@interface ViewResolver()
 	
 	@end
 	
 	@implementation ViewResolver
 	
 	-(FrameworkUIViewController*) resolve:(NSString*)name:(viewTypes)type{
 	
 	    UIViewController *viewController = nil;
 	
 	    switch(type)
 	    {
 	        case LOGIN:
 	            viewController = [self manufactureViewController:@"Login"];
 	            break;
 	        case SPLASH:
 	            viewController = [self manufactureViewController:@"Splash"];
 	            break;
 	        case LANDING:
 	            viewController = [self manufactureViewController:@"Landing"];
 	            break;
 	        case EXIT_CONFIRMATION:
 	            viewController = [self manufactureViewController:@"ExitConfirmation"];
 	            break;
 	    }
 	
 	    return viewController;
 	}
 	
 	// Used to manufacture type/resource names based on a common view name.
 	// Customize this to the strategy of resource naming conventions.
 	// Note: this also is responsible for nib selection based on device type
 	- (UIViewController*) manufactureViewController:(NSString *) viewName
 	{
 	    NSString *controllerName = [NSString stringWithFormat:@"%@UIViewController", viewName];
 	    NSString *viewNibName;
 	    if ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone) {
 	        viewNibName = [NSString stringWithFormat:@"%@UIView_iPhone", viewName];
 	    }
 	    else{
 	        viewNibName = [NSString stringWithFormat:@"%@UIView_iPad", viewName];
 	    }
 	    return [[NSClassFromString(controllerName) alloc] initWithNibName:viewNibName bundle:nil];
 	}
 	
 	@end
 	

Building a Notification Responder

When designing a navigation controller, we should take advantage of the ‘stack-based’ UINavigationController available in the SDK. This controller allows for transition animations, forward/backward push/pop instancing and a variety of optimizations around memory management.

Let’s start by creating a group in the Framework root called ResponderNavigationController.

In this folder, we want to create a header for our responder class in the new group called ResponderNavigationController.h:

#import <UIKit/UIKit.h>
#import "ViewResolver.h"

@interface ResponderNavigationController : UINavigationController

// Exposed so the app delegate can hand us the resolver at startup
@property(nonatomic, retain) ViewResolver *viewResolver;

@end

Next, create an implementation in the same folder called ResponderNavigationController.m, within which we will define our navigation details:

#import "ResponderNavigationController.h"
 	
 	@interface ResponderNavigationController ()
 	
 	@end
 	
 	@implementation ResponderNavigationController
 	
 	- (id) init {
 	    self = [super init];
 	    if (self != nil) {
 	       // do init stuff here
 	    }
 	    return self;
 	}
 	
 	@end
 	

This is pretty bare-bones, but we get a ton of cool features just by extending the UINavigationController base.

see UINavigationController Class Reference for more details

Within the ResponderNavigationController, let’s define two navigation notifications:

    • NavigateToNotification - requests a forward transition to a target view
    • NavigateBackNotification - requests a pop back down the navigation stack

In the constructor of our controller, let’s subscribe to these messages and listen for them in a handler:

@implementation ResponderNavigationController
 	
 	- (id) init {
 	    self = [super init];
 	    if (self != nil) {
 	
 	        // Subscribe to the notification center for navigation events
 	        [[NSNotificationCenter defaultCenter] removeObserver:self];
 	        [[NSNotificationCenter defaultCenter] addObserver:self
 	                                                 selector:@selector(handleNotification:)
 	                                                     name:@"NavigateToNotification"
 	                                                   object:nil];
 	        [[NSNotificationCenter defaultCenter] addObserver:self
 	                                                 selector:@selector(handleNotification:)
 	                                                     name:@"NavigateBackNotification"
 	                                                   object:nil];
 	    }
 	    return self;
 	}
 	
 	// Called when a navigation notification is available
 	- (void) handleNotification:(NSNotification *) notification
 	{
 	    if ([[notification name] isEqualToString:@"NavigateToNotification"]){
 	
 	        NSLog (@"Navigating forward");
 	    }else
 	        if ([[notification name] isEqualToString:@"NavigateBackNotification"]){
 	            NSLog (@"Navigating back");
 	        }
 	}
 	
 	@end
 	

When a notification is fired, the SDK allows for userInfo to be passed by the caller.

Next, let’s extend our implementation to allow callers to control the behavior of the navigation event via their userInfo parameters. We will need the sender of the notification to tell us which view they want to navigate to, along with any post data:

- (void) handleNotification:(NSNotification *) notification
 	{
 	    NSDictionary *userInfo = [notification userInfo];
 	    if ([[notification name] isEqualToString:@"NavigateToNotification"]){
 	
 	        NSString *viewName = [notification object];
 	        NSObject *postData = [userInfo objectForKey:@"data"];
 	        NSNumber *num = [userInfo objectForKey:@"viewType"];
 	        int viewTypeIntValue = [num intValue];
 	
 	        // Manufacture the View and update the root view on self
 	        UIViewController *viewController = [self.viewResolver resolve:viewName :viewTypeIntValue];
 	
 	
 	    }else
 	        if ([[notification name] isEqualToString:@"NavigateBackNotification"]){
 	            NSLog (@"Navigating back");
 	        }
 	}
 	

Integrating the Navigation Responder

Now that we have a fancy navigation handler, we need to take a few steps to integrate it into the core architecture of our application before we can actually start moving around the app.

First, we need to ‘mount’ the ResponderNavigationController into the app through the pre-existing deAppDelegate.m implementation:

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
 	{
 	    self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
 	
 	    // Setup window defaults
 	    self.window.backgroundColor = [UIColor whiteColor];
 	    [self.window setNeedsDisplay];
 	    [self.window makeKeyAndVisible];
 	
 	    // Create the view controller
 	    ViewResolver *viewResolver = [[ViewResolver alloc] init];
 	
 	    // Configure the primary navigation controller
 	    self.responderNavigationController = [[ResponderNavigationController alloc] initWithRootViewController:[viewResolver resolve:@"Splash" :SPLASH]];
 	    self.responderNavigationController.viewResolver = viewResolver;
 	    [self.responderNavigationController setNavigationBarHidden:TRUE];
 	    self.window.rootViewController = self.responderNavigationController;
 	
 	    return YES;
 	}
 	

Note: here we also go ahead and use initWithRootViewController during the instantiation of the ResponderNavigationController type to make sure the user has something to stare at while things are bootstrapping.

This gives the ResponderNavigationController full control over the application’s navigation workflow.

Next, we need to extend the ResponderNavigationController.m implementation to actually use the ViewResolver and ViewTypes to correctly transition between pages:

- (void) handleNotification:(NSNotification *) notification
 	{
 	    NSDictionary *userInfo = [notification userInfo];
 	
 	    if ([[notification name] isEqualToString:@"NavigateToNotification"] &&
 	        self.viewResolver != nil){
 	
 	        NSString *viewName = [notification object];
 	        NSObject *postData = [userInfo objectForKey:@"data"];
 	        NSNumber *num = [userInfo objectForKey:@"viewType"];
 	        int viewTypeIntValue = [num intValue];
 	
 	        // Call the Resolver
 	        UIViewController *viewController = [self.viewResolver resolve:viewName :viewTypeIntValue];
 	
 	        if(viewController != nil)
 	        {
 	            NSNumber *clearBackstack = [userInfo objectForKey:@"clearBackstack"];
 	            if([clearBackstack isEqualToNumber:[NSNumber numberWithInt:1]])
 	            {
 	                if([[self viewControllers] containsObject:viewController]) {
 	                    [self popToViewController:viewController animated:YES];
 	                } else {
 	                    [self setViewControllers:[NSArray arrayWithObject:viewController] animated:YES];
 	                }
 	            }
 	            else
 	                [self pushViewController:viewController animated:YES];
 	
 	
 	            // The post data event can't occur before the controller has become the root.
 	            // Otherwise, we won't be able to bind to outlets
 	            if(postData != nil && [viewController isKindOfClass:[FrameworkUIViewController class]])
 	            {
 	                [((FrameworkUIViewController *)viewController) onDataset:postData];
 	            }
 	        }
 	        else{
 	            NSLog (@"Navigation Failure for ViewName %@", viewName);
 	        }
 	    }else
 	        if ([[notification name] isEqualToString:@"NavigateBackNotification"]){
 	            NSLog (@"Navigating back");
 	            [self popToRootViewControllerAnimated:YES];
 	        }
 	}
 	

Data Posting

We also want the ability for views to communicate state around the Application without creating static providers.

Notice that in our FrameworkUIViewController definition, we included a nifty onDataset method (backed by a data property) that the view implementor can rely on being called whenever NSObject data is included in a Navigation Notification.

@interface FrameworkUIViewController : UIViewController
 	
 	- (void)onDataset:(NSObject*)data;
 	@property(nonatomic, retain, readonly) NSObject *data;
 	
 	@end
 	

Each view can override this method and do its relevant binding therein:

// Override the onDataSet method of the FrameworkUIViewController
 	- (void)onDataset:(NSObject*)data{
 	    [super onDataset:data];
 	
 	    NSString *username = data;
 	    [self welcomeTextOutlet].text = [NSString stringWithFormat:@"Welcome %@!",username];
 	}
 	

The onDataset method is called after instantiation and attachment as the root controller, because attempts to bind data to outlets will fail if the owning UIViewController is not attached to the visual tree.

Examples of the API in Action

Now that the ResponderNavigationController is using the NSNotificationCenter to listen for navigation requests, we have the ability to send notifications from anywhere in our architecture.

To Navigate forward to the Login View:

 NSMutableDictionary *userInfo = [[NSMutableDictionary alloc] init];
 	    [userInfo setObject:[NSNumber numberWithInt: LOGIN] forKey:@"viewType"];
 	
 	    [[NSNotificationCenter defaultCenter]
 	     postNotificationName:@"NavigateToNotification"
 	     object:self
 	     userInfo:userInfo];
 	

To Navigate Forward to the Landing View, posting data:

 NSMutableDictionary *userInfo = [[NSMutableDictionary alloc] init];
 	    [userInfo setObject:[NSNumber numberWithInt: LANDING] forKey:@"viewType"];
 	    [userInfo setObject:[[self usernameOutlet] text] forKey:@"data"];
 	
 	    [[NSNotificationCenter defaultCenter]
 	     postNotificationName:@"NavigateToNotification"
 	     object:self
 	     userInfo:userInfo];
 	

To Navigate backwards, popping the current view from the stack:

[[NSNotificationCenter defaultCenter]
 	     postNotificationName:@"NavigateBackNotification"
 	     object:self
 	 userInfo:nil];
 	

To Navigate to a view, clearing all back-stack history:

NSMutableDictionary *userInfo = [[NSMutableDictionary alloc] init];
 	[userInfo setObject:[NSNumber numberWithInt: LANDING] forKey:@"viewType"];
 	[userInfo setObject:[NSNumber numberWithBool:true ] forKey:@"clearBackstack"];
 	
 	[[NSNotificationCenter defaultCenter]
 	 postNotificationName:@"NavigateToNotification"
 	 object:self
 	 userInfo:userInfo];
 	

Conclusion

As you can tell, the NSNotificationCenter is a perfect Message Bus for orchestrating a decoupled navigation framework, something that MVVM gurus like me really appreciate.

Final Architecture

Grab the NavigationFramework project source and use this pattern as a bootstrap for your team.

Funnel any feedback/suggestions to todd at deepelement.com

Reasons to pass on TFS GIT

We were all hyped to hear the announcement that Hosted TFS now supports Git.

Since TFS has traditionally been tuned for Enterprise (large team) usage, clients are constantly asking me for a verdict on switching their teams over.

The announcement:

http://tfs.visualstudio.com/en-us/learn/code/publish-new-team-project-vs-git

Unfortunately, I have not been able to recommend Microsoft’s version of hosted git, pointing out the following flaws:

The Missing Fork

Something that strikes me as odd is that the TFS team totally missed the ‘Fork’ concept, which is one of the major responsibilities of a hosted Git solution.

In Git, repositories are more than globs of files in a workspace. Each repository can be ‘Cloned’, in which the full record of commits is migrated into a separate repository. The intention of this operation is that at some future moment, two diverging repositories can be re-joined given at least one common commit (SHA).

This is a remarkably powerful ability that teams rely on for day-to-day workflows covering areas of code isolation, security and life-cycle management.

The Traditional GitHub User Story

Let’s take a peek at how a standard code update works with GitHub for a larger team:

The Ninja Consultant - Git Style

  1. Bob is a Technical Manager and has a massive repository on GitHub that a group of trusted engineers collaborate on
  2. Andrea is a consultant hired with a specialty for a particular feature
  3. Andrea issues a git clone [repo] on Bob’s repository and commits changes to her Clone.
  4. Once her changes are completed, Andrea submits a Pull Request to Bob to notify that her working repository is ready to be merged into the primary repository.
  5. Bob responds to the Pull Request by reviewing the changes, applying whatever quality gates are necessary and commits a merge of the changes into the company’s repository

In this example, notice:

    • Andrea never needed write access to Bob’s primary repository
    • All of her work happened in her own cloned (forked) copy
    • Bob applied his quality gates before anything touched the team’s repository

Let’s use these points to create a more realistic use-case (one that I have seen so often it is a hard reality with Git):

The In-Over-My-Head Consultant - Git Style

  1. Bob is a Technical Manager who has a massive repository on GitHub that a group of trusted engineers collaborate on
  2. Jeff is a consultant hired with a specialty for a particular feature, but is super uncomfortable with the solution & Git
  3. Jeff issues a git clone [repo], then totally destroys the master branch by issuing git rebase -i 30 times, trying to climb out of a single bad decision. Jeff then does a git push origin master -f (forcing his changes onto his forked GitHub repo).
  4. Jeff then issues a Pull Request to Bob and takes a nap.
  5. Bob responds to the Pull Request by reviewing the changes, issuing a face-palm for hiring Jeff and denies the merge.

In this example, notice:

    • Jeff only destroyed his own forked copy of the repository
    • Bob’s primary repository was never at risk
    • The damage was contained entirely to Jeff’s clone

The TFS GIT User Story

Now, let’s compare that to what TFS GIT allows with branching (without the ability to fork):

The Ninja Consultant - TFS Git Style

  1. Bob is a Technical Manager that has a massive repository in TFS that a group of trusted engineers collaborate on using TFS Project security
  2. Andrea copies down the repository by issuing a git clone [repo]. She then creates a feature branch and commits her changes.
  3. When Andrea is finished working, she issues a git push origin [branch] to promote the branch up to the hosted TFS team repository
  4. Bob gets a Check-in notification, pulls the branch to his local repo instance, applies quality gates and then merges the changes into the master branch

In this example, notice:

    • Andrea pushes her branch directly into the shared team repository
    • Bob’s review happens after the branch already lives in that repository
    • Everyone commits to the same repository, so everyone can write to it

Although this process seems harmless from the standard TFS perspective, it begins to severely break down in the more realistic use-case with Git.

The In-Over-My-Head Consultant - TFS Git Style

  1. Bob is a Technical Manager that has a massive repository in TFS that a group of trusted engineers collaborate on using TFS Project security
  2. Jeff is a consultant hired with a specialty for a particular feature, but is super uncomfortable with the solution & Git
  3. Jeff issues a git clone [repo], then totally destroys the master branch by issuing git rebase -i 30 times, trying to climb out of a single bad decision. In this process of destruction, he re-writes 2 years of commit history, resets the code back to the stone-age and creates a web of reflog breadcrumbs.
  4. Bob gets a Check-in notification, pulls the branch to his local repo instance, and calmly accepts that the entire repo is jacked.

At this point, Bob starts getting phone calls from the Engineers and QA teams while he sets up his sleeping bag for a solid 20 hours of recovery.
The team is dead in the water unless someone happens to have a ‘tip’ version of the repo they can force up.

I think the problem here is obvious…

At the End of the Day

Without a solid forking strategy, every team member has the ability to take down the integrity of the repository.

The scariest part is that I have seen this happen many times simply because team members are interacting with their local repository without full knowledge of what effect commands have.

Until TFS GIT offers security blocks around --force or creates an elegant way to allow forks, every team is vulnerable to this accidental attack.

Because TFS GIT decided to not support Forks & the Pull Request, all team members may only commit via the Branching Strategy.

This is a very dangerous approach in that it is practically impossible to prevent someone from accidentally corrupting the shared history.

Why I am Steering Clear for Now

Without the ability to trust strangers on repositories, and especially without a purposeful ecosystem for socializing on code, how is a closed-box Git solution any different than TFS itself?

Remember, strangers in the source world could range from a 10-year-old genius contributing after school to that random voice on your weekly stand-up calls (in an organization too large to send out meeting requests with an attendee listing).

Without the ability to cleanly and effortlessly manage people of all types, a particular Git implementation isn’t an improvement over TFS or SVN at all.

Adding the missing format keybinding for Sublime Text 2

The built-in code-formatting feature of Sublime Text 2 is called “Reindent”:

(Edit → Line → Reindent)

I recommend mapping this feature to the keybinding pattern ⌘⇧R

Just add the following to the keybinding user-preference file at Preferences → Key Bindings – User:

{"keys": ["super+shift+r"], "command": "reindent", "args": {"single_line": false}}

The World Map Problem

How It All Went Wrong

Long ago, in 1569, a Flemish geographer and cartographer named Gerardus Mercator created a remarkably accurate nautical map that suspended the major land masses of the earth in their “school-house” positions.

The method he used, quite genius for the time, was to obscure distance near each pole in order to allow for an arced space to be easily represented on a two-dimensional map.

Gerardus Mercator's Pretty darn good map

In 1566, only three years before the Mercator projection, the ‘orientation’ of the world map had not yet been established. Many maps were still being published with the world ‘upside down’ from our modern vision. see Nicolas Deslien’s 1566 South-Orientation Map

This method was so popular that it survived as a fundamental cartographic approach moving forward into the modern psyche.

Consider the Mercator method using late-20th century satellite imagery:

Gerardus Mercator's method used on satellite imagery

Revealing the foundation of the approach, representative ‘squares’ of area are enlarged the closer you get to each pole. Here is the same projection with area marked by filled squares, to show the distortion:

Distortion caused by Gerardus Mercator's method

This causes a serious amount of distortion in the ‘actual’ size of the associated land mass, in that squares are barely comparable across the map.

For example, an elongated square near the south pole is only proportional to those at similar distances from the equator near the north pole, not to the equatorial regions themselves.
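The distortion is easy to quantify. On a Mercator map the local linear scale factor at latitude φ is sec(φ), so apparent area grows with sec²(φ). A quick back-of-the-envelope sketch, assuming a spherical earth:

```javascript
// Mercator stretches distances at latitude `latDegrees` by sec(lat),
// so a patch of land appears sec(lat)^2 times its true area.
function mercatorAreaInflation(latDegrees) {
    const lat = (latDegrees * Math.PI) / 180;
    const scale = 1 / Math.cos(lat); // linear stretch factor
    return scale * scale;            // area inflation factor
}

// At the equator there is no inflation...
console.log(mercatorAreaInflation(0).toFixed(2));  // "1.00"

// ...but central Greenland (~72° N) is drawn roughly 10x too large
console.log(mercatorAreaInflation(72).toFixed(2)); // "10.47"
```

This is why Greenland and Antarctica look so enormous on the classroom map: the inflation factor compounds quadratically as you approach either pole.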

As a more obvious example that should hit home, this is like defining the distance between buildings in New York through a Fish-Eye lens.

Source: http://www.estatevaults.com/

Despite the invention of satellites, the Mercator method is taught in millions of schools worldwide.

To add to the confusion, satellite imagery has two major limitations:

  1. All of the land masses are not naturally visible by a single observing angle
  2. The visible view is already distorted by the curvature of the earth

So, this implies the problem: there is no natural way to see the entire earth at the same time.

This doesn’t, though, align with our impression of land as we move across it.

The distance it takes to cross a city is constant and measurable, and not in accordance with a visual representation that involves heavy skewing.

How Wrong Is It?

Let’s start with the currently taught “school-house” map of the earth:

Because the distortion approach skews the vertical distances, a comparison shows that the ‘visible’ land-mass can be increased by up to 10%.

Here is an example of land mass differences comparing the Mercator method vs actual square miles:

The information has been normalized around the comparison of the United States and estimated based on visible area of country boundaries.

In fact, the nature of the equatorial distance skew dictates that the inaccuracy grows simply as a matter of proximity to the poles.

Taking the same world image and applying a motion blur proportional to the visual-area inaccuracy, we see that the map misrepresents an overwhelming majority of its area.

The Search for an Accurate Map

In 1855, a clergyman named James Gall noticed this misrepresentation and produced the first land-area map of the world.

In this representation, Gall started with a similar approach to Mercator, in that the longitudinal and latitudinal gridding still shows the curvature offset and area bias.
But an additional skew was included to offset the area mis-allocation in the final visual land areas.

Gall used an Orthographic projection to map a spherical spatial object into a two-dimensional representation. Very impressive for 1855.

The result is a close to 1/1 ratio between the visual land area space and the actual measured area of each country.

Source: wikipedia

This map suffers from similar distortion points as well and was only accepted in obscure academic circles for almost 100 years.

Finally, around 1967, Arno Peters decided to re-invent the same map with a more reasonable coordinate system (one that overcomes the confusion around arc coordinates).

Peters used an Ellipsoid projection to map the earth, not starting from the sphere like Gall. There is controversy that Peters ‘stole’ Gall’s foundation by simply remapping the coordinate system.

Peters’ method created a more extensive re-implementation of Gall’s original idea, leading to many more variations on the same approach.

Where We Are Today

The European Space Agency uses a variant of the Gall-Peters projection, lacking the equatorial skew necessary to accurately represent visual land mass. This causes a visual ‘squash’ of the equatorial regions:

Source: http://esamultimedia.esa.int/images/EarthObservation/Envisat/tapisserie_100x55_H.jpg

Google Maps uses the Mercator projection (now well over 400 years old and visually inaccurate).

Notice that Greenland appears as large as Africa and that Antarctica is a behemoth.

Source: https://maps.google.com/

Bing Maps uses the Mercator projection:

Source: http://www.bing.com/maps/?FORM=Z9LH2

Conclusions

Between out-of-date school boards and popular convention, the world hasn’t adopted an accurate area map of the world.

This results in people having bizarre understandings of the comparative regions of the world beyond their local space. Even further, almost 40 years after the advent of satellite imagery, children are still being taught using a nearly 450-year-old method of land representation.

Next time you look at a map, keep in mind how it is skewing your representations of the world you live within.

HTML5 - Snagging JavaScript Memory Leaks

Working on high-performance HTML5 applications, I have noticed that one of the most common anti-patterns is the ‘pinning’ of closures to the Visual Tree.

Pinning is a situation where your object graphs stay attached to the Document/Window scope for the lifetime of your HTML5 application


Google Chrome includes a Heap Profile Tool that is invaluable for tracking memory leaks

Example of a Memory Leak caused by Closure Pinning

Consider the following page, which creates a bunch of Components, attaches them to the Document Loaded event and then tries to delete their references:


 	// Component Definition
 	function Component(name) {
 	    this.pageName = name;
 	}
 	
 	
 	// Lets decorate the Component instances with
 	// a method that can do some work
 	Component.prototype.onLoaded = function() {
 	    // do some fancy work
 	}
 	
 	// On Component.attach, lets subscribe to the
 	// DOM's loaded event and do something on our
 	// self closure context
 	Component.prototype.attach = function() {
 	    var self = this;
 	    $(document).load(function() {
 	        self.onLoaded();
 	    });
 	}
 	
 	
 	// Test Case: Create a ton of Components
 	for (var i = 0; i <= 4; i++) {
 	    var newComponent = new Component('my page');
 	    newComponent.attach();
 	    delete newComponent;
 	    newComponent = null;
 	}
 	

What Chrome Reports

image

Explanation

The above code creates a Component which is ‘attached’ to the page’s Document scope via the loaded event.

The key to this leak is the ‘self’ reference in the loaded callback closure.

In order for the Document to be able to invoke the callbacks on ‘future’ loaded events, the Component instances have to be held by the Document. Hence, the Component instances are ‘Pinned’ to the Document and the Delete calls do nothing…

Even NULL’ing out the local reference doesn’t buy the Component its freedom to flee.

Another important observation is that each Component instance appears to be pretty big; this means that the Component and all of its direct references are kept alive in memory by the ‘pinned’ closure.
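The same retention can be reproduced outside the browser. Here is a minimal, jQuery-free sketch of the pinning pattern using the standard EventTarget API available in modern runtimes; the ‘bus’ target and names are illustrative stand-ins for the Document:

```javascript
// A long-lived event target stands in for the Document
const bus = new EventTarget();
let loadedCalls = 0;

function Component(name) {
    this.pageName = name;
}

Component.prototype.onLoaded = function () {
    loadedCalls++;
};

Component.prototype.attach = function () {
    const self = this;
    // The closure captures `self`, building the reference chain:
    // bus -> listener -> self -> Component
    bus.addEventListener("load", function () {
        self.onLoaded();
    });
};

for (let i = 0; i <= 4; i++) {
    let newComponent = new Component("my page");
    newComponent.attach();
    newComponent = null; // the local reference is gone...
}

// ...but all five Components are still reachable through the bus:
bus.dispatchEvent(new Event("load"));
// loadedCalls is now 5
```

Even though every local reference was nulled, all five instances still respond through the bus, which is exactly the retaining path the heap snapshot shows.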

In Short: the ‘self’ reference in the loaded callback pins every Component instance to the Document, so deleting or nulling the local variable frees nothing.

Proving the Leak by Dereferencing the Self Closure Pinning

Let’s prove this hypothesis by removing the self closure reference, while leaving the loaded event handler in place:

// On Component.attach, lets subscribe to the
 	// DOM's loaded event and do something on our
 	// self closure context
 	Component.prototype.attach = function() {
 	    var self = this;
 	    $(document).load(function() {
 	        // Self reference removed, not causing closure leak
 	        //self.onLoaded();
 	    });
 	}
 	

What Chrome Reports

image

Explanation

In this case, the loaded event is still registered and the callback still fires on Document events. The difference is that the Document holds no direct reference to the Components themselves, because the callback can run without the Component remaining in memory.

We can then call Delete and newComponent=null on the local reference and the relationship is severed.

In Short: keeping the subscription but dropping the ‘self’ reference lets the Component instances be collected; it is the captured reference, not the event subscription itself, that pins them.

Avoiding the Closure Pinning Leak by Managing Relationships

Let’s take an alternative approach that avoids context Pinning by introducing a life-cycle to the Component instance:

// Component Definition
 	function Component(name) {
 	    this.pageName = name;
 	    this.onLoadedHandler = this.onLoadedHandler.bind(this);
 	}
 	
 	
 	// We can now do our work with the Component
 	// context passed as expected!
 	Component.prototype.onLoadedHandler = function() {
 	    var otherPageName = this.pageName + 'hi!';
 	}
 	
 	// On Component.attach, lets subscribe to the
 	// window's load event with our pre-bound handler,
 	// so `this` is the Component inside the callback
 	Component.prototype.attach = function() {
 	    $(window).bind("load", this.onLoadedHandler);
 	}
 	
 	// Detach the load event handler from the window
 	// context for-realz
 	Component.prototype.detach = function() {
 	    $(window).unbind("load", this.onLoadedHandler);
 	}
 	
 	
 	// Test Case: Create a ton of Components
 	for (var i = 0; i <= 4; i++) {
 	    var newComponent = new Component('my page');
 	    newComponent.attach();
 	    newComponent.detach();
 	    delete newComponent;
 	    newComponent = null;
 	}
 	

What Chrome Reports

image

Explanation

The three important additions to the javascript here are:

  1. Binding the this context to the handler callback method (see the constructor)
  2. Using the jQuery bind API to subscribe to the load event
  3. Introducing literal Attach and Detach phases to the life-time of the Component instances

Thanks to Vyacheslav Egorov for pointing me in the right direction about the best way to pass the context to the handler.

These combinations allow us to avoid context Pinning to the Visual Tree through the abstraction of the native Javascript call-stack.
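For readers without jQuery, the same life-cycle can be sketched with the standard addEventListener/removeEventListener API; here a bare EventTarget stands in for the window, and the names are illustrative:

```javascript
function Component(name, target) {
    this.pageName = name;
    this.target = target;
    // Bind once and keep the result, so detach can later
    // remove the exact same function object
    this.onLoadedHandler = this.onLoadedHandler.bind(this);
}

Component.prototype.onLoadedHandler = function () {
    // `this` is the Component, thanks to the bind above
    this.loadedCount = (this.loadedCount || 0) + 1;
};

Component.prototype.attach = function () {
    this.target.addEventListener("load", this.onLoadedHandler);
};

Component.prototype.detach = function () {
    // Removing the listener severs the target -> Component
    // reference, freeing the instance for collection
    this.target.removeEventListener("load", this.onLoadedHandler);
};
```

Note that removeEventListener only works because the bound handler is created once in the constructor; binding inline at subscribe time would produce a new function object each call, and the detach would silently fail.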

In Short: bind the handler context once, and give your Components explicit Attach and Detach phases so nothing stays pinned to the Visual Tree.

Conclusion

Closures are awesome in that they are wicked-easy, especially when designing Asynchronous APIs.

For most traditional web sites, closure leaks will be flushed out within a few minutes, swept away by the constant browser refreshes. But, for the HTML5 Architect, always keep track of where your Business Layer is ‘Pinned’ to the Visual Tree. This could mean the life, or death, of your application in the long run.

One dangling association could result in massive object graphs staying in memory for the life-time of the Document/Window state, which can be considered Application-scoped for long-term offline applications.