Friday, April 3, 2015

7 reasons why frameworks are the new programming languages

In the 1980s, the easiest way to start a nerd fight was to proclaim that your favorite programming language was best. C, Pascal, Lisp, Fortran? Programmers spent hours explaining exactly why their particular way of crafting an if-then-else clause was superior to your way.

That was then. Today, battles involving syntax and structure are largely over because the world has converged on a few simple standards. The differences between the semicolons, curly brackets, and whatnot in C, Java, and JavaScript are minor. Interesting debates about typing and closures still exist, but most are moot because automation is closing the gap. If you don't like specifying a data type, there's a good chance the computer will be able to infer exactly what you meant. If your boss wants JavaScript but you like Java, a cross-compiler will convert all of your statically typed Java into minified JavaScript, ready to run in a browser. Why fight when technology has our backs?
Today, the interesting action is in frameworks. When I sat down with other faculty members at Johns Hopkins University to plan out a new course, frameworks dominated the conversation. Is Angular better than Ember? Is Node.js all that?
This was the center of the action, worthy of a survey course that would explore the architecture of the most important software packages girding today’s Internet.
In this sense, frameworks are the new programming languages. They are where the latest ideas, philosophies, and practicalities of modern-day coding are found. Some flame out, but many are becoming the new fundamental building blocks of programming. Here are seven facets fueling the framework trend -- and making frameworks the new favorite hotbed for nerd fights.

Most coding is stringing together APIs

There was a time when writing software meant deploying all of your knowledge of the programming language to squeeze the most out of the code. It made sense to master the complexity of pointers, functions, and scope -- the quality of the code depended on doing the right thing. These days automation handles much of this. If you leave worthless statements in the code, don't worry. The compiler strips out dead code. If you leave pointers dangling, the garbage collector will probably figure it out.
Plus, the practice of coding is different now. Most code is now a long line of API calls. There's occasional reformatting of the data in between API calls, but even those jobs are usually handled by other APIs. A lucky few get to write clever, bit-banging, pointer-juggling code for the guts of our machines, but most of us work with the higher layers. We simply run pipes between APIs.
Because of this, it's more important to understand how an API behaves and what it can do. Which data structures does it accept? How do the algorithms behave when the data set grows larger? Questions like these are more central to today’s programming than ones about syntax or language. Indeed, there are now a number of tools that make it simple to call a routine in one language from another. It's relatively simple to link C libraries to Java code, for instance. Understanding the APIs is what matters.

The shoulders of giants are worth standing on

Imagine you've become a disciple of Erlang or another new language. You decide it offers the best platform for writing a stable, bug-free app. This is a nice sentiment, but it could take years for you to rewrite all the code available for Java or PHP into your latest language of choice. Sure, your code could turn out to be dramatically better, but is that worth the extra time?
Frameworks let us leverage the hard work of those who came before us. We may not like the architecture they chose and we may argue over implementation details, but it's more efficient to stifle our complaints and find a way to live with the differences. It's so much easier to inherit all the good and the bad of the code base through a framework. Take the macho route and write everything yourself in your favorite new language rather than one of its more popular frameworks, and you won't enjoy the cream of your new choice nearly as quickly as you would by simply deferring to the framework makers and their APIs.

Knowing the architecture is what matters, not the syntax

When most of the coding is stringing together API calls, there's not much advantage in learning the idiosyncrasies of the language. Sure, you could become an expert on how Java initializes static fields in the objects, but you would be much better off figuring out how to leverage the power of Lucene or JavaDB or some other pile of code. You could spend months grokking the optimizing routines of Objective-C compilers, but learning the ins and outs of the latest Apple core library will really make your code scream. You'll get much further learning the picky details of the framework than the syntax of the language on which the framework rests.
Most of our code spends most of its time in the inner loops of libraries. Getting the details of the language correct can help, but knowing what's going on in the libraries can pay off dramatically.

Algorithms dominate

Learning a programming language can help you juggle the data stashed in the variables, but that only takes you so far. The real hurdle is getting the algorithms correct, and those are usually defined and implemented by the frameworks.
Many programmers understand it's dangerous and wasteful to spend time re-implementing standard algorithms and data structures. Sure, you might be able to tune them a bit to your needs, but you risk making subtle mistakes. Frameworks have been widely tested over the years. They represent our collective investment in a software infrastructure. There aren't many examples of when it makes sense to "go off the grid," toss aside the hard work of others, and build an algorithmic cabin with your own two hands.
The right approach is to study the frameworks and learn how to use them to your best advantage. Choose the wrong data structure, and you could turn a linear job into one that takes quadratic time in the size of the input. That's a big hassle once you go viral.
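To make that concrete, here is a small illustrative JavaScript sketch (the function names are invented for this example): checking membership against a plain array inside a loop rescans the array for every element, while a Set does each lookup in constant time.

```javascript
// Which incoming ids do we already know about?
// Array version: indexOf rescans the array per lookup -> O(n^2) overall.
function findKnownSlow(existingIds, incomingIds) {
  return incomingIds.filter(function (id) {
    return existingIds.indexOf(id) !== -1;
  });
}

// Set version: constant-time lookups -> O(n) overall.
function findKnownFast(existingIds, incomingIds) {
  var known = new Set(existingIds);
  return incomingIds.filter(function (id) {
    return known.has(id);
  });
}
```

Both return the same answer; only the growth rate differs, and on a few thousand items the difference is what separates a snappy page from a frozen one.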

Compilers and smart IDEs correct your syntax

Am I supposed to put a semicolon after the last statement in a block? Is the semicolon a "separator" or a "terminator"? Language designers have spent a long time crafting parsers that enforce these rules and -- guess what -- I don't care. There was a time, a decade or so ago, when I did care, but now the IDEs do the work for me. They're constantly watching my back and telling me when I screw up. I let them do the thinking for me and spend my time pondering the big questions about my code. The IDE is the peon, the programming assistant that handles those petty details.
Automation has saved us from the tedium of programming syntax. Oh sure, it doesn't do everything for us. We still need to have a vague idea of which punctuation to deploy. But most of the time, the details of the languages don't matter.
The IDEs also help with frameworks, but only with the little details. They'll remind us of the parameters for a function call, and they'll even check whether the data is the right type. After that, we're supposed to know which functions to use and how to plug them together. This is where our mind focuses when the syntax doesn't matter so much -- toward the higher-level methods and functions that will help surface solutions more expediently.

Syntax is disappearing with visual languages

While this has been predicted for many years, it's slowly happening with some -- though not all -- code. Some programming continues to be very textual, but some is becoming more visual, which means the underlying computer language doesn't matter as much.
GUI builders are the easiest places to see this. You can drag and drop user interface widgets all day and night without worrying about whether it's C or Java or anything else. The details are coded in visual boxes.
Tools like AndroidBuilder make it possible to drag and drop much of the layout, and AndroidBuilder will dutifully write the XML and Java stubs needed to make the code work. It's hard to argue that visual languages are going to be the future, especially after they failed repeatedly to realize the prophecy, but the tools are growing more visual when they can be. This means languages are a bit less powerful or important.

Code is law

Computer languages are largely agnostic. They're designed to be open, accepting, and almost infinitely malleable. They're meant to do whatever you want. Sure, sometimes you need to use a few extra characters because of the syntax, but those are merely keystrokes. After that, it's mainly if-then-elses, plus occasional clever bits. All of the language will still help you get the results you want the way you want to get them. If there are strictures, they're designed to keep your code as bug-free as possible, not limit what you can do.
Frameworks are where the power lies. This is where architects can decide what is allowed and what is inherently forbidden. If the architect doesn't want something to happen, the magic function call is missing from the API. If the architect likes the idea, there are usually multiple function calls and plenty of supporting tools. This is why Larry Lessig, the Harvard law professor, likes to say, "Code is Law."
The frameworks establish the rules for their corner of the Internet and you must live within them once you choose them. Some blogging platforms encourage linking with others through AJAX calls and some don't support them. That's why you must investigate carefully and choose wisely. It's ultimately why frameworks dominate every part of our lives, even those few moments when we're not programming.
This story, "7 reasons why frameworks are the new programming languages" was originally published by InfoWorld.

Real-World JavaScript Performance Tips

Reposted from the New Relic Blog 
JavaScript frameworks can be a blessing and a curse, a fact that New Relic has become intimately familiar with as we continually work to improve the New Relic APM UI. We’ve chosen Angular.js as our frontend framework and have loved the way it makes our Web applications feel like they’re running native code. But we’ve discovered that without close attention to performance tuning, Angular can bring a browser to its knees.
We got some firsthand experience tackling huge JS and Angular performance hurdles while improving the application summary table functionality. The usual Angular performance suggestions helped somewhat, but we got even larger improvements by revising our application architecture and JavaScript practices. Here’s how we did it, along with some of the challenges that came up along the way.


Pushing server-side logic to client-side

There are a number of reasons to push application logic to the JS layer, the biggest being that letting JS do the work of massaging the data for presentation in the view layer boosts app server performance. In the case of our app, we converted our Rails endpoints to serve raw JSON and eliminated unnecessary view processing.
This improved our user experience so that customers can now seamlessly navigate among all the applications on their account, quickly finding the ones relevant to the task at hand. This goes hand-in-hand with loading and managing a large amount of data on the client side. With a small to moderately sized application, this isn’t a problem. But many of our customers monitor hundreds or thousands of applications that need to have their data updated in near-real-time.
We initially developed the page using our own account’s data, at which point everything seemed to be working smoothly. Then, close to the end of the project we loaded an account with lots of applications (around 3,000!). It was like that spinning gif was judging us!

Challenge 1: Data binding large amounts of data with Angular isn’t always optimal

Since this was our first big Angular application in APM, we had some idea that using Angular’s ng-repeat with lots of data could cause a dip in performance. A lot has been written about this phenomenon, so I won’t dwell on how we dealt with large numbers of watchers here. (In fact, the release of Angular 1.3 introduced one-time binding syntax — double colon {{::name}} — that addresses this very problem.)
At the time, Angular hadn’t released its one-time binding, so we used Bindonce. However, reducing the number of watchers didn’t solve all of our problems. We were still showing our users a lot of applications in the table, raising the bigger question: “Is it really helpful to show that much data at once?” By committing to better solutions for navigating through applications, such as pagination and filters, we were able to confidently cut down the number of data bindings on the screen at any given time.
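For illustration only (the field names here are invented), a one-time binding inside an ng-repeat row looks like this; each :: expression is watched only until it resolves to a defined value, after which its watcher is removed:

```html
<tr ng-repeat="app in apps track by app.id">
  <td>{{::app.name}}</td>   <!-- bound once; the watcher is dropped after it resolves -->
  <td>{{app.health}}</td>   <!-- live binding; still checked on every digest cycle -->
</tr>
```

With thousands of rows, every :: you can justify removes thousands of watchers from each digest cycle.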

Challenge 2: Garbage collection

We went back to a smaller data set and profiled the page in the browser’s dev tools. What we found was that our JS app was spending a lot of time in garbage collection (GC). This meant we had a memory usage problem.
[Screenshot: DevTools memory timeline; the blue Used JS Heap line shows a sawtooth pattern]
In case you aren’t familiar with JS GC issues, a sawtooth shape like the one shown in the Used JS Heap line (blue) above indicates a high rate of short-lived objects being created and then reclaimed by the browser’s garbage collector. When your rate of GC is high, you begin to experience “jank,” or the pauses may even kill the page entirely. Understanding why we were experiencing such a large number of GC events requires understanding something about the structure of the JS app.
New Relic is in the business of keeping users in the know about the current state of their applications. For example, if the current view is ordered by application health and an application that was in good health suddenly experiences problems, we need to make sure that change becomes visible. In order to keep the data fresh for the JS frontend, we basically set a timer and periodically update the account’s applications in the background. When the data comes back from the server we need to process the response and display changes.
Our original approach was to completely reassign the variable to a new array containing all of the fresh application data. This approach was dead simple to code in Angular but, obviously, if you’re worried about object references being allocated and de-allocated, it’s too heavy-handed. We were simply throwing away the old array of applications. Cue the garbage collectors!
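Roughly speaking, the heavy-handed refresh amounted to a sketch like this (a hypothetical reconstruction with invented names, not our actual code):

```javascript
// Anti-pattern sketch: every poll replaces the whole array, so the
// previous array and every object inside it immediately become garbage.
function refreshApplications(scope, rawData) {
  scope.applications = rawData.map(function (raw) {
    return { id: raw.id, name: raw.name, health: raw.health };
  });
}
```

With thousands of applications refreshed every minute, each cycle discards thousands of objects at once, which is exactly the sawtooth the profiler showed.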
So we essentially came up with a homegrown in-memory cache of all the customer’s application data. We no longer build up and tear down arrays after each successful response from the server. In order to do this we keep a reference map using the application’s ID to tell us where it is in the cache. When we iterate over the new data we look up each application and update its properties. We return a reference to the cache to be used in controllers and/or directives. This created complexity since it’s extremely important to never reassign the cache variable instance once it’s been initialized.
It works kind of like this:
angular.module("apm").factory("Application", function($http){  // "apm" is a placeholder module name
  // Constructor: copy the raw server attributes onto the instance.
  function Application(attributes){
    angular.extend(this, attributes);
  }

  // We found in benchmarking that $resource was slower than bare $http.
  Application.get = function(){
    return $http.get("url/to/application/data", {params: anythingNeeded});
  };

  return Application;
});
 
angular.module("apm").service("ApplicationCache", function($interval, Application){
  var cache      = [],
      cacheMap   = {},
      timerDelay = 1 * 60 * 1000; // 1 minute
 
  // Return the cache array reference that you can bind to in a controller or directive.
  // This adds complexity since you should NEVER break the reference by reassigning
  // the cache variable.
  this.all = function(){
    return cache;
  };
 
  // Convenient lookup helper
  this.find = function(id){
    var applicationIndex = cacheMap[id];
    return cache[applicationIndex];
  };
 
  var fetch = function(){
    Application.get()
      .success(updateCache)
      .error(someErrorHandlingFunctionOfYourChoosing);
  };
 
  var updateCache = function(applicationData){
    var application,
        existingLocation,
        availableLocation = cache.length;
    for(var i = 0, len = applicationData.length; i < len; i++){
      application = applicationData[i];
      existingLocation = cacheMap[application.id];
 
      if (existingLocation !== undefined) {
        updateCachedAttributes(application, existingLocation);
      } else {
        addNewApplication(application, availableLocation++);
      }
    }
  };
 
  // Copy fresh attributes onto the existing object instead of replacing it.
  var updateCachedAttributes = function(applicationData, location){
    var application = cache[location];
    for(var attr in applicationData){
      application[attr] = applicationData[attr];
    }
  };
 
  var addNewApplication = function(applicationData, location){
    var application = new Application(applicationData);
    cache[location] = application;
    cacheMap[application.id] = location;
  };
 
  // Poll periodically ($interval), not just once ($timeout).
  $interval(fetch, timerDelay);
});
Here’s how the memory usage looked in the profiler after implementing the cache and changing how we used arrays:
[Screenshot: DevTools memory timeline after implementing the cache, showing far fewer GC events]

Challenge 3: Use array best practices.

Even after we solved the problem of making too many objects we didn’t need, the page still wasn’t as snappy as we wanted. Every time a label was applied or a user re-sorted the table, there was a noticeably slow response. After reading a few great blog posts and JSPerf tests, we began benchmarking our array iterators and completely overhauled how we were doing things:
  • We stopped using myArray.push() in favor of myArray[myArray.length] = somethingNew
  • We started preallocating the array when we knew the length of the data: myArray = new Array(knownLengthOfData)
  • We rewrote our iterators and sorting algorithms. We needed to know the intersection of arrays for filtering with our labels. We ran a JSPerf test on different iterators and algorithms and used the best one.
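Put together, those habits look something like this sketch (the names are ours, for illustration; as always, benchmark in your own engine before committing to micro-optimizations like these): a label filter that builds a lookup map, preallocates the result array, and writes by index instead of calling push().

```javascript
// Intersect the visible apps with a selected label's set of app ids.
function intersectByLabel(apps, labeledIds) {
  var wanted = {};                      // plain-object lookup map, O(1) membership test
  for (var i = 0; i < labeledIds.length; i++) {
    wanted[labeledIds[i]] = true;
  }
  var result = new Array(apps.length);  // preallocate to the known upper bound
  var count = 0;
  for (var j = 0; j < apps.length; j++) {
    if (wanted[apps[j].id]) {
      result[count] = apps[j];          // indexed write instead of push()
      count++;
    }
  }
  result.length = count;                // trim the unused slots
  return result;
}
```

The lookup map turns the intersection from a nested scan into a single pass over each array.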

Conclusion

As front-end applications take over more and more heavy server-side computing, the chance of writing poorly performing JavaScript grows, especially when tools like Angular make it really fast to create a slick UI. The larger the data gets, the more important it becomes to pay attention to garbage collection in your devtools, be mindful about object creation, and use the fastest iterators you can find.


About the author

Katie is a Jill of all trades with a passion for making people’s lives better through design and craftsmanship. Pursuing software was the next not-so-seemingly-logical step after having worked in fields ranging from architecture to heavy construction. She is currently part of the APM team at New Relic, writing Ruby and JavaScript. When not building something, she’s exploring the forest in the PNW or on the hunt for good food and drink.