Saturday, March 21, 2015

Open source jobs: It's a seller's market

The Linux Jobs Report published by the Linux Foundation this week amplifies a development many of us have already discovered from experience: Open source infrastructure skills are in high demand. Together with recruitment company Dice, the Linux Foundation's survey of 4,456 people found that nearly all the hiring managers interviewed had jobs available for “Linux professionals” -- a catch-all description embracing both developers and system administrators.

Suse, for example, told me it has 66 vacancies at present and will be creating at least 150 new positions this year, all associated with open source software. The company said it will need more than Linux developers; it is hunting for OpenStack developers, Docker specialists, and engineers to build distributed storage software too.
Suse isn’t alone. The Jobs Report highlights open source cloud computing skills as a priority across those surveyed, with 42 percent of hiring managers seeking OpenStack and CloudStack experience. In addition, the prevalence of open source tools for security is clear from the 23 percent seeking skills in that area.
With a new certification product line, the Linux Foundation was keen to promote certification as a prerequisite for hiring. But Marie Louise van Deutekom, Global HR director for Suse, told me:
We look at many aspects when hiring, including experience, contributions, and fit into the SUSE culture. Certifications is one aspect, but certainly not the only one.
Professionals I approached directly for comment were even more skeptical. One told me he wouldn't bother interviewing for a job that required certifications; he was interested only in jobs that recognized hands-on experience as valuable. Another told me:
They should be seen as naught but a crutch for HR people and are an extremely poor filter. They reward the wrong thing, so they select the wrong candidates.
While the better certifications, like those offered by Red Hat or the Linux Foundation, involve skills tests rather than multiple-choice tests (which the professionals I approached dismissed out of hand as worthless), the best passport to a new job is undoubtedly experience. Joining an open source project to help develop its code -- and remaining part of the community while using that knowledge on the job -- allows newcomers to build that track record. The best certifications are likely to be those issued by open source communities themselves.
In a job market where 88 percent of hiring managers report it’s “very difficult” or “somewhat difficult” to find Linux-savvy staff, certifications won’t be high on the list of priorities. Only bureaucracies think otherwise. With open source infrastructure increasingly built from open source parts, it’s no surprise to find that the job market is a seeker’s market and not a hirer’s market. Until that situation reverses, it will be proven experience and not certifications that closes the deal.

This story, "Open source jobs: It's a seller's market" was originally published by InfoWorld.

How to Land a Software Engineering Job?

The other day I read this piece by David Byttow on “How to land an engineering job”. And I don’t fully agree with his assertions.

I do agree, of course, that one must always be writing code. Not writing code is the worst that can happen to a software engineer.
But some details are where our opinions diverge. I don’t agree that you should know the complexities of famous algorithms and data structures by heart, and I do not agree that you should be able to implement them from scratch. He gives no justification for this advice; he just says “do so”. And don’t get me wrong – you should know what computational complexity is, and what algorithms there are for traversing graphs and trees. But implementing them yourself? What for? I have implemented sorting algorithms, tree structures, and the like a couple of times, just for the sake of it. Two years later I can’t do it again without checking an example or a description. Why? Because you never need those things in your day-to-day programming. And why memorize the complexity of a graph search algorithm if you can look it up in 30 seconds?
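For concreteness, the sort of textbook exercise in question – a breadth-first graph traversal – fits in a few lines. The sketch below (with a made-up example graph) is exactly the kind of code that is easy to reproduce with a reference at hand and easy to forget two years later, which is the point:

```javascript
// Breadth-first traversal of a graph given as an adjacency list.
// Runs in O(V + E) time -- the kind of complexity fact that takes
// 30 seconds to look up.
function bfs(graph, start) {
  const visited = new Set([start]);
  const queue = [start];
  const order = [];
  while (queue.length > 0) {
    const node = queue.shift();
    order.push(node);
    for (const neighbor of graph[node] || []) {
      if (!visited.has(neighbor)) {
        visited.add(neighbor);
        queue.push(neighbor);
      }
    }
  }
  return order;
}

// Example: a small diamond-shaped graph.
const graph = { a: ['b', 'c'], b: ['d'], c: ['d'], d: [] };
console.log(bfs(graph, 'a')); // [ 'a', 'b', 'c', 'd' ]
```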
The other thing I don’t agree with is solving TopCoder-like problems. Yes, they probably help you improve your algorithm-writing skills, but spending time on them rather than writing actual code (e.g. as side projects) is, to me, a bit of a waste. It’s not that you shouldn’t do it, just that you don’t have to. If you like solving those types of problems – by all means, do it. But don’t insist that “real programmers solve non-real-world puzzles”. Especially when the question is how to get a software engineering job.
Because software engineering, as I again agree with David Byttow, is a lot more than writing code. It’s contemplating all aspects of a software system, using many technologies and many levels of abstraction. But he insists that you must focus on the lower levels (e.g. data structures) and be an expert there. I think you are free to choose the levels of abstraction you are an expert in, as long as you have a good overview of those below and above.
And let’s face it – getting an engineering job is easy. The demand for engineers is way higher than the supply, so you have to be really incompetent not to be able to get any job. Getting an interesting and highly paid job is a different thing, but I can assure you that there are enough of those as well, and not all of them require you to solve freshman-year-style problems in interviews. I do see a trend, especially in Silicon Valley, of demanding that candidates know the computer science components of software engineering by heart. I don’t particularly like it, but if you want a job at Google or Facebook, then you probably do have to know the complexities of popular algorithms and be able to implement a red-black tree on a whiteboard. That doesn’t mean every interesting company out there requires those things, and it doesn’t mean you are not a worthy engineer.
One final disagreement – not knowing exact details about the company you are applying to (or that is recruiting you) is fine. Maybe companies are obsessed with themselves, but when you go to a small-to-medium-sized company without worldwide fame, not knowing the competition in its niche is mostly fine. (And it makes a difference whether you applied or they headhunted you.)
But won’t my counter-advice land you a mediocre job? No. There are companies doing “cool stuff” that don’t care whether you know Dijkstra’s algorithm by heart. As long as you demonstrate the ability to solve problems, broad expertise, and passion for programming, you are in. That includes (among others) TomTom, eBay, Rakuten, and Ericsson (companies I’ve interviewed with or worked at). It may not land you a job at Google, but should we focus on being good engineers, or on fulfilling Silicon Valley’s artificial interview criteria?
So far I’ve mostly disagreed, but I didn’t actually give a bullet-point how-to. So in addition to the things I agree with in David’s article, here are some more:
  • know a technology well – if you’ve worked with a given technology for the past year, you have to know it in depth; otherwise you seem like that guy who doesn’t actually know what he’s doing but still gets handed some of the boilerplate/easy tasks.
  • show that software engineering is not a 9-to-5 thing for you. Staying up-to-date with latest trends, having a blog, GitHub contributions, own side projects, talks, meetups – all of these count.
  • have broad expertise – being just a “very good Spring/Rails/Akka/…” developer doesn’t cut it. You have to know how software is designed, deployed, managed. You don’t need to have written millions of lines of CloudFormation, or supported a Puppet installation by yourself, but at least you have to know what infrastructure and deployment automation is. (Whew, I managed to avoid the “full-stack” buzzword)
  • know the basics – as pointed out above, you don’t have to know complexities and implementations by heart. But not knowing what a hashtable or a linked list is (in terms of usage patterns, at least) hurts your chances significantly. Knowing that something exists when you need it is the practical compromise between knowing how to write it and not having the faintest idea about it.
  • be able to solve problems – interviewers will often ask a hypothetical question (in fact, one that they recently faced) and see how you attack the problem. Don’t say you don’t have enough information or that you don’t know – just try to solve it. Your answer may not be correct, but a well-thought-out attempt still counts.
  • be respectful. That doesn’t mean overly-professional or shy, but assume that the people interviewing you are just like you – good developers that love creating software.
That won’t guarantee you a job, of course. And it won’t get you a job at Google. But you can land a job where you can do pretty interesting things on a large scale.

Published at DZone

Facebook nets billions in savings from Open Compute Project

Facebook, by adhering to the Open Compute Project it founded in October 2011, has saved more than $2 billion over the past three years, a company official said Tuesday at the Open Compute Summit Conference in Silicon Valley.

The Open Compute Project (OCP) began as an effort to reduce Facebook's hardware costs, and since then, Vice President of Engineering Jay Parikh said, the company has tracked more than $2 billion in savings via optimizations to its data center, software, and network. “The bottom line for us is actually pretty large,” with the company pursuing efficiency as a first principle, Parikh said.
In the past year, OCP-compliant designs have produced enough energy savings to power 80,000 homes for a year, according to Facebook. OCP-related measures have also cut carbon emissions by about 400,000 metric tons, equivalent to taking 95,000 cars off the road for a year.
“This stuff really does matter when you think about this optimization,” Parikh said. By leveraging OCP, Facebook gains flexibility and saves money and energy in building out its infrastructure.
The project is intended to produce more efficient server, storage, and data center hardware designs, in a model mimicking the open source software movement. Facebook has contributed ideas and designs to OCP ranging from mechanical and electrical systems in the data center to server designs, Parikh said. The company made several announcements Tuesday related to the project, including the Yosemite SoC compute server it has been working on with Intel, intended to dramatically increase speed while lowering the cost of serving Facebook traffic.
The company also proposed a specification for its top-of-rack network switch, Wedge. Facebook is working with the likes of Accton and Broadcom on a Wedge product for the Open Compute Project community, with Accton to ship Wedge in the first half of this year.
Major backers -- including Intel, Hewlett-Packard, Microsoft, and Canonical -- are appearing at this week’s conference.

This story, "Facebook nets billions in savings from Open Compute Project" was originally published by InfoWorld.

Red Hat formulates a plan for building enterprise mobile apps

"The whole Web architecture is giving way to an emerging mobile architecture," said Cathal McGloin, Red Hat vice president of mobile platforms.
Like IBM and Oracle, Red Hat has been working to extend its enterprise software portfolio so it can support mobile applications as well, particularly those that its customers develop in house.

The company said Tuesday that it has finished integrating the mobile platform it acquired with its purchase of FeedHenry last October into its own software portfolio, and it outlined how enterprises can use these technologies to build mobile applications.
About 51 percent of organizations surveyed by IT analyst firm 451 Research are increasing their budgets for mobile development this year. Many face challenges, given that traditional software development methods don't work well for the rapidly evolving world of mobile development.
Different mobile devices demand different user interfaces. Users expect mobile applications to be easy to use, and mobile apps must evolve quickly to keep pace with the competition.
For enterprises, developing mobile applications for either customers or employees can be a demanding task, especially when the programs need to be seamlessly connected with complex back-end systems.
Red Hat wants to help bridge the worlds of mobile apps and back-end systems of record.
"The role of IT is to introduce new agile development technologies to complement the ability to run existing systems," said McGloin, who is also the former CEO of FeedHenry.
The FeedHenry mobile platform is designed to reduce the work needed to maintain mobile applications, including tasks around data synchronization, caching and security.
For mobile-based cloud services, the company has established a single architecture based on a set of REST (Representational State Transfer) APIs (application programming interfaces), allowing different applications to communicate with one another.
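The article does not show FeedHenry's actual endpoints, so as a generic illustration only, here is the REST pattern being described – applications exchanging JSON over HTTP resource URLs – in plain JavaScript. The base URL and resource names below are invented for the example:

```javascript
// Generic REST client sketch -- the endpoint paths here are
// invented, not FeedHenry's real API.

// Build a resource URL in the usual REST style: /orders or /orders/42.
function resourceUrl(baseUrl, resource, id) {
  const base = baseUrl.replace(/\/+$/, '');
  return id === undefined ? `${base}/${resource}` : `${base}/${resource}/${id}`;
}

// Fetch a resource as JSON (global fetch in browsers and Node 18+).
async function getJson(url) {
  const response = await fetch(url);
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  return response.json();
}

console.log(resourceUrl('https://mobile.example.com/', 'orders', 42));
// https://mobile.example.com/orders/42
```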

Red Hat understands that developing a mobile application is not the same as building one for the desktop, which is why the company has augmented its software stack with new technologies for mobile development.

Red Hat's integrated development environment (IDE), JBoss Developer Studio, can be used to create apps that run on FeedHenry.
The FeedHenry platform has been augmented with additional tools for mobile Application Lifecycle Management (ALM) and collaboration, allowing software development teams to rapidly iterate through new releases of a mobile app.
Red Hat has also teamed the FeedHenry software with its own set of platform services, OpenShift, allowing organizations to run their mobile apps within a cloud service.
This integration also allows enterprise customers to run mobile apps from their own private clouds.
A portion of Red Hat's customers are already using the mobile technology to build mobile applications, McGloin said, including companies in manufacturing, transportation, and workforce management.

This story, "Red Hat formulates a plan for building enterprise mobile apps" was originally published by InfoWorld.

Scriptr: Write your Internet of things in JavaScript

Scriptr, the company behind the scripting engine launched this week, is looking to link developers to the Internet of things.

Combining cloud accessibility with the use of JavaScript, scriptr enables developers to easily connect devices to the Internet. The company says the IoT has expanded opportunities for developers but presents logistical challenges, such as devices constrained by limited processing power and memory, which make it difficult to code complex integrations and business logic. Scriptr attempts to solve these problems via cloud-based business logic and Web services, accessed through a browser-based IDE. Developers can build custom APIs without dealing with server and application stack management.
"Everything you create under scriptr becomes a secure Web services API," Scriptr CEO Rabih Nassar said in an email. "Within each script, you can invoke any number of arbitrary third-party Web-services to create mashups -- or orchestrations -- of cloud services. An IoT developer would use scriptr to create the back-end services needed to support the business logic and orchestration needs of his application, and then invoke those APIs from his device." Current use cases from existing and potential customers have included industrial heavy machinery monitoring, wearables, and smart cities, said Nassar.
On the enterprise side, scriptr is part of the data services exchange platform, Nassar said. "This provides scalable API access from inside scriptr to a growing list of enterprise cloud services, including big data, streaming databases, enterprise reporting, M2M, home automation platforms, etc."
Also in the cloud vein, Scriptr will offer a partner service that abstracts the integration of popular consumer cloud platforms. An abstraction layer is already provided to integrate with Twitter, Facebook, Twilio, and Android and iOS push notifications.
Scriptr envisions a lot of prototyping via a free version of the platform. "Our paying customers get their dedicated private cloud implementations. We charge based on the size of the dedicated server clusters and usage," Nassar said. In addition to basic scripting, higher-level constructs are planned to simplify development of server-side business logic, atomic business rules, API mashups, and finite state machines.

This story, "Scriptr: Write your Internet of things in JavaScript" was originally published by InfoWorld.