Thursday, March 19, 2015

Google Go ventures into Android app development

Google's Go language, which is centered on developer productivity and concurrent programming, can now be officially used for Android application development.

The capability is new in version 1.4, released this week. "The most notable new feature in this release is official support for Android. Using the support in the core and repository libraries, it is now possible to write simple Android apps using only Go code," said Andrew Gerrand, lead on the Google Cloud Platform developer relations team, in a blog post. "At this stage, the support libraries are still nascent and under heavy development. Early adopters should expect a bumpy ride, but we welcome the community to get involved."
Android development has commonly leveraged Java programming on the Dalvik VM, with Dalvik replaced by ART (Android Runtime) in the recently released Android 5.0. Open source Go, which features quick compilation to machine code, garbage collection, and concurrency mechanisms, expands the options for Android developers. The upgrade can build binaries for ARM processors running Android, the release notes state, and can build a .so library to be loaded by an Android application using supporting packages in the mobile subrepository.
"Go is about making software simpler," said Gerrand in an email, "so naturally, application development should be simpler in Go. The Go Android APIs are designed for things like drawing on the screen, producing sounds, and handling touch events, which makes it a great solution for developing simple applications, like games."
Android could help Go grow, said analyst Stephen O'Grady, of RedMonk: "The Android support is very interesting, as it could eventually benefit the language much the same way Java has from the growth of the mobile platform."
Beyond the Android capabilities, version 1.4 improves garbage collection and features support for ARM processors on Native Client cross-platform technology, as well as for AMD64 on Plan 9. A fully concurrent collector will come in the next few releases.
Introduced in 2009, the language has been gaining adherents lately; Go 1.3, the predecessor to 1.4, arrived six months ago. Go, O'Grady said, "is growing at a healthy pace. It was just outside our top 20 the last time we ran our rankings [in June], and I would not be surprised to see it in the Top 20 when we run them in January."
Version 1.4 contains "a small language change, support for more operating systems and processor architectures and improvements to the tool chain and libraries," Gerrand said. It maintains backward compatibility with previous releases. "Most programs will run about the same speed or slightly faster in 1.4 than in 1.3; some will be slightly slower. There are many changes, making it hard to be precise about what to expect."
The change to the language is a tweak to the syntax of for-range loops, said Gerrand. "You may now write for range s { to loop over each item from s, without having to assign the value, loop index, or map key." The go command, meanwhile, has a new subcommand, go generate, which automates running tools that generate source code before compilation. With version 1.4, the Go project has also moved from Mercurial to Git for source code control.
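As an illustration (not from the article), the bare for-range form lets you loop over a collection without binding an index or value; a minimal sketch:

```go
package main

import "fmt"

// countItems uses Go 1.4's bare for-range clause: no index or value is bound.
func countItems(s []string) int {
	n := 0
	for range s {
		n++
	}
	return n
}

func main() {
	fmt.Println(countItems([]string{"a", "b", "c"})) // 3
}
```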
This story, "Google Go ventures into Android app development" was originally published by InfoWorld.

7 timeless lessons of programming ‘graybeards’

In episode 1.06 of the HBO series "Silicon Valley," Richard, the founder of a startup, gets into a bind and turns for help to a boy who looks 13 or 14.

The boy genius takes one look at Richard and says, “I thought you’d be younger. What are you, 25?”
“26,” Richard replies.
The software industry venerates the young. If you have a family, you're too old to code. If you're pushing 30 or even 25, you're already over the hill.
Alas, the whippersnappers aren't always the best solution. While their brains are full of details about the latest, trendiest architectures, frameworks, and stacks, they lack fundamental experience with how software really works and doesn't. These experiences come only after many lost weeks of frustration borne of weird and inexplicable bugs.
Like the viewers of “Silicon Valley,” who by the end of episode 1.06 get the satisfaction of watching the boy genius crash and burn, many of us programming graybeards enjoy a wee bit of schadenfreude when those who have ignored us for being “past our prime” end up with a flaming pile of code simply because they didn’t listen to their programming elders.
In the spirit of sharing or to simply wag a wise finger at the young folks once again, here are several lessons that can't be learned by jumping on the latest hype train for a few weeks. They are known only to geezers who need two hexadecimal digits to write their age.

Memory matters

It wasn't so long ago that computer RAM was measured in megabytes not gigabytes. When I built my first computer (a Sol-20), it was measured in kilobytes. There were about 64 RAM chips on that board and each had about 18 pins. I don't recall the exact number, but I remember soldering every last one of them myself. When I messed up, I had to resolder until the memory test passed.
When you jump through hoops like that for RAM, you learn to treat it like gold. Kids today allocate RAM left and right. They leave pointers dangling and don't clean up their data structures because memory seems cheap. They know they can click a button and the hypervisor will add another 16GB to the cloud instance. Why should anyone programming today care about RAM when Amazon will rent you an instance with 244GB?
But there's always a limit to what the garbage collector will do, exactly as there's a limit to how many times a parent will clean up your room. You can allocate a big heap, but eventually you need to clean up the memory. If you're wasteful and run through RAM like tissues in flu season, the garbage collector could seize up grinding through that 244GB.
Then there's the danger of virtual memory. Your software will run 100 to 1,000 times slower if the computer runs out of RAM and starts swapping out to disk. Virtual memory is great in theory, but slower than sludge in practice. Programmers today need to recognize that RAM is still precious. If they don't, the software that runs quickly during development will slow to a crawl when the crowds show up. Your work simply won't scale. These days, everything is about being able to scale. Manage your memory before your software or service falls apart.

Computer networks are slow

The marketing folks selling the cloud like to pretend the cloud is a kind of computing heaven where angels move data with a blink. If you want to store your data, they're ready to sell you a simple Web service that will provide permanent, backed-up storage and you won't need to ever worry about it.
They may be right in that you might not need to worry about it, but you'll certainly need to wait for it. All traffic in and out of computers takes time. Computer networks are drastically slower than the traffic between the CPU and the local disk drive.
Programming graybeards grew up in a time when the Internet didn't exist. FidoNet would route your message by dialing up another computer that might be closer to the destination. Your data would take days to make its way across the country, squawking and whistling through modems along the way. This painful experience taught them that the right solution is to perform as much computation as you can locally and write to a distant Web service only when everything is as small and final as possible. Today’s programmers can take a tip from these hard-earned lessons: treat the promises of cloud storage with suspicion, and put off the network round trip until the last possible millisecond.

Compilers have bugs

When things go haywire, the problem more often than not resides in our code. We forgot to initialize something, or we forgot to check for a null pointer. Whatever the specific reason, every programmer knows, when our software falls over, it’s our own dumb mistake -- period.
As it turns out, the most maddening errors aren’t our fault. Sometimes the blame lies squarely on the compiler or the interpreter. While today’s compilers and interpreters are relatively stable, they're not perfect, and their stability has been hard-earned. Unfortunately, taking that stability for granted has become the norm.
It's important to remember that they, too, can be wrong, and to consider that possibility when debugging. Old programmers learned long ago that sometimes the best route for debugging an issue involves testing not our code but our tools. If you put implicit trust in the compiler and give no thought to the computations it is making to render your code, you can spend days or weeks pulling out your hair in search of a bug in your work that doesn’t exist. The young kids, alas, will learn this soon enough.

Speed matters to users

Long ago, I heard that IBM did a study on usability and found that people's minds will start to wander after 100 milliseconds. Is it true? I asked a search engine, but the Internet hung and I forgot to try again.
Anyone who ever used IBM's old green-screen apps hooked up to an IBM mainframe knows that IBM built its machines as if this 100-millisecond mind-wandering threshold was a fact hard-wired in our brains. They fretted over the I/O circuitry. When they sold the mainframes, they issued spec sheets that counted how many I/O channels were in the box, in the same way car manufacturers count cylinders in the engines. Sure, the machines crashed, exactly like modern ones, but when they ran smoothly, the data flew out of these channels directly to the users.
I have witnessed at least one programming whippersnapper defend a new AJAX-heavy project that was bogged down by too many JavaScript libraries and data flowing to the browser. It's not fair, they often retort, to compare their slow-as-sludge innovations with the old green-screen terminals that they have replaced. The rest of the company should stop complaining. After all, we have better graphics and more colors in our apps. It’s true -- the cool, CSS-enabled everything looks great, but users hate it because it’s slow.

The real Web is never as fast as the office network

Modern websites can be time pigs. It can often take several seconds for the megabytes of JavaScript libraries to arrive. Then the browser has to push these multilayered megabytes through a JIT compiler. If we could add up all of the time the world spends recompiling jQuery, it could be thousands or even millions of years.
This is an easy mistake to make for programmers who are in love with browser-based tools that employ AJAX everywhere. It all looks great in the demo at the office. After all, the server is usually on the desk back in the cubicle. Sometimes the "server" is running on localhost. Of course, the files arrive with the snap of a finger and everything looks great, even when the boss tests it from the corner office.
But the users on a DSL line or at the end of a cellular connection routed through an overloaded tower? They're still waiting for the libraries to arrive. When they don't arrive in a few milliseconds, those users are off to some article on TMZ.

Algorithmic complexity matters

On one project, I ran into trouble with an issue exactly like Richard in "Silicon Valley" and I turned to someone below the drinking age who knew Greasemonkey backward and forward. He rewrote our code and sent it back. After reading through the changes, I realized he had made it look more elegant but the algorithmic complexity went from O(n) to O(n^2). He was sticking data in a list in order to match things. It looked pretty, but it would get very slow as n got large.
Algorithm complexity is one thing that college courses in computer science do well. Alas, many high school kids haven't picked this up while teaching themselves Ruby or CoffeeScript in a weekend. Complexity analysis may seem abstruse and theoretical, but it can make a big difference as projects scale. Everything looks great when n is small. Exactly as code can run quickly when there's enough memory, bad algorithms can look zippy in testing. But when the users multiply, it's a nightmare to wait on an algorithm that takes O(n^2) or, even worse, O(n^3).
When I asked our boy genius whether he meant to turn the matching process into a quadratic algorithm, he scratched his head. He wasn't sure what we were talking about. After we replaced his list with a hash table, all was well again. He's probably old enough to understand by now.
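The general pattern is worth sketching. In this hypothetical JavaScript version (the function names are invented, not from the project), matching by scanning a list inside a loop is O(n^2), while building a hash-backed Set first keeps the whole match linear:

```javascript
// O(n^2): for each id, Array.prototype.includes scans the whole wanted list.
function matchSlow(ids, wanted) {
  return ids.filter(id => wanted.includes(id));
}

// O(n): build the Set once; each .has() lookup is constant time on average.
function matchFast(ids, wanted) {
  const wantedSet = new Set(wanted);
  return ids.filter(id => wantedSet.has(id));
}

console.log(matchSlow(['a', 'b', 'c', 'd'], ['b', 'd'])); // [ 'b', 'd' ]
console.log(matchFast(['a', 'b', 'c', 'd'], ['b', 'd'])); // [ 'b', 'd' ]
```

Both return the same answer; only the growth rate differs as n gets large.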

Libraries can suck

The people who write libraries don't always have your best interest at heart. They're trying to help, but they're often building something for the world, not your pesky little problem. They often end up building a Swiss Army knife that can handle many different versions of the problem, not something optimized for your issue. That's good engineering and great coding, but it can be slow.
If you're not paying attention, libraries can drag your code into a slow swamp and you won't even know it. I once had a young programmer mock my code because I wrote 10 lines to pick characters out of a string.
"I can do that with a regular expression and one line of code," he boasted. "Ten-to-one improvement." He didn't consider the way that his one line of code would parse and reparse that regular expression every single time it was called. He simply thought he was writing one line of code and I was writing 10.
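To make the hidden cost concrete, here is a hypothetical JavaScript sketch (names invented): when the pattern arrives as a string, building the RegExp inside the function recompiles it on every call, while hoisting the compiled object pays that cost only once.

```javascript
// Recompiles the pattern object on every single call.
function digitsSlow(text, pattern) {
  return text.match(new RegExp(pattern, 'g')) || [];
}

// Compiled once, at load time; every call reuses the same object.
const DIGITS = /\d+/g;
function digitsFast(text) {
  return text.match(DIGITS) || [];
}

console.log(digitsSlow('a1 b22', '\\d+')); // [ '1', '22' ]
console.log(digitsFast('a1 b22'));         // [ '1', '22' ]
```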
Libraries and APIs can be great when used appropriately. But if they're used in the inner loops, they can have a devastating effect on speed and you won't know why.
This story, "7 timeless lessons of programming ‘graybeards’" was originally published by InfoWorld.

JavaScript ES6 Variable Declarations with let and const

Everyone in the JavaScript world is talking about ECMAScript 6 (ES6, a.k.a. ES 2015) and the big changes coming to objects (class, super(), etc.), functions (default params, etc.), and modules (import/export), but less attention is being given to variables and how they are declared. In fact, some attention is being given, but perhaps without the right focus. I recently attended the jQuery UK conference where Dave Methvin gave a nice overview of ES6, with some great attention on let and const.

In this article I wanted to cover these two new keywords for declaring variables and differentiate them from var. And possibly more importantly, I want to identify what some folks are considering the new standard for declaring variables in ES6. The basic idea here is that let should, in time, replace var as the default declaration keyword. In fact, according to some, var should simply not be used at all in new code. The const keyword should be used for any variable where the reference should never be changed, and let for anything else.

Replacing var with let

This workflow shift may seem dramatic at first, but less so when we think about the differences between let and var. While var creates a variable scoped within its nearest parent function, let scopes the variable to the nearest block; this includes for loops, if statements, and others.
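A minimal sketch of such an example (reconstructed; the variable names are illustrative):

```javascript
function foo() {
  console.log(x);      // undefined: the var declaration below is hoisted
  var x = 1;
  if (x) {
    // console.log(y); // ReferenceError: before its let, y is in the TDZ
    let y = 2;
  }
  // console.log(y);   // ReferenceError: y exists only inside the if block
}
foo();
// console.log(x);     // ReferenceError: x is scoped to foo
```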
In the example above we create a new function (and var scope) with foo and then call the function. As expected, the last console.log() statement will produce a ReferenceError because x is only defined (scoped) inside the foo() function. The first console statement will execute just fine due to variable hoisting. In this case, x will evaluate to undefined. The second console statement, however, is more interesting. In fact, both of the log(y) calls will fail because the let keyword allows much tighter scoping than var. The y variable only exists inside of that if block, and nowhere else! Dave Methvin calls that area before the let declaration the “Temporal Dead Zone.”
Hopefully this example illustrates the specificity of let, but you may be saying that sometimes you actually want a function-scoped variable. No problem, simply create the variable at the top of your function!
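A minimal sketch (reconstructed; names are illustrative):

```javascript
function foo() {
  let y = 1;        // declared at the top: visible throughout the function
  if (y) {
    y = 2;          // the if block sees the same y
  }
  console.log(y);   // 2
}
foo();
// console.log(y);  // ReferenceError: y still does not leak out of foo
```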
The function above declares the y variable at the top of the function, thus giving it a larger scope than in our first example. We can see that y is accessible anywhere inside this function, but not outside of it; the last console.log(y) statement will produce a ReferenceError. Before we move on to const, let’s reiterate our thesis: let should completely replace var in ES6. The examples above should show you that let is more powerful, while still allowing almost all of the flexibility of var. I’m not the first one to say it, but I’m a believer now.

Constant Reference, Not Value

The other new keyword for variable declaration in ES6 is const, but it is often misinterpreted as creating a “constant value”. Instead, in ES6 a const represents a constant reference to a value (the same is true in most languages, in fact). In other words, the pointer that the variable name is using cannot change in memory, but the thing the variable points to might change.
Here’s a simple example. In the code below we create a new variable with a constant reference to an Array. We can then add values onto the array and since this does not change the reference, everything works:
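A sketch of that example (reconstructed; the variable name is illustrative):

```javascript
const names = [];
names.push('Dave');   // mutating the array's contents is fine:
names.push('Sam');    // the reference held by names never changes
console.log(names);   // [ 'Dave', 'Sam' ]
```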
However, if we try to change the variable reference to a new array – even to one with the same contents – we will get an “Assignment to constant variable” error (a TypeError at runtime in current engines):
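A sketch of the failing case (reconstructed; names are illustrative):

```javascript
const words = ['es6', 'const'];
function replaceWords() {
  words = ['es6', 'const']; // new array, same contents: still a reassignment
}
try {
  replaceWords();
} catch (e) {
  console.log(e.message);   // "Assignment to constant variable."
}
```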
Of course, if you have a const that points to a primitive such as a string or number, then there really isn’t anything to change about that value. All methods on String and Number return new values (objects).
The last note about using const is that it follows the same new scoping rules as let! This means that between let and const we should be able to completely replace var in our code. In fact, there are many folks supporting the idea of only “allowing” var in legacy code that hasn’t been touched. When a developer jumps into a file to update some code, they could (and possibly should) be updating all var statements to let or const as appropriate, with proper scoping.

But that’s just for ES6…

It’s true. The new let and const keywords are not available in ES5, and thus in most execution environments. However, with good transpilers such as Babel we can compile our ES6 JavaScript into runnable ES5 code for deployment to a browser environment.
Luckily for us Node.js (and io.js) developers we don’t have to worry about what browser someone is executing our JavaScript code in! If you’re using Node v0.12 (you are, right?), you can have access to these features with two small changes. First, you have to run your code with “harmony” features enabled (the original codename for ES6 was “harmony”):
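The command itself is not shown here; with Node v0.12 it would look something like this (app.js standing in for your entry script):

```shell
node --harmony app.js
```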
The second change is that any code using let or const (or any other ES6 feature) must be in strict mode. To do so, simply place "use strict"; at the top of every module. Alternatively, you could use the --use-strict flag on the CLI, but that may be a bit much.
In io.js you don’t need the --harmony flag because all of those features are being rolled right into the code. However, you do still need to make your code strict. Again, this can be done by simply placing a "use strict"; statement at the top of your module files.
Are you using io.js? Wondering if StrongLoop supports it? We do! In addition to StrongLoop’s API platform being able to run on io.js, we also provide support for our customers running on io.js.
Now go forth, and create a better variable declaration workflow!