Wednesday, August 31, 2016

Advanced Circuits With Printed Circuit Boards Available on 90-day Limited Warranty

Those who are not familiar with printed circuit boards may not be aware that they are used in all kinds of electronic products. A PCB consists of a non-conductive substrate laminated with copper cladding, which is etched into conductive traces that connect electronic components; capacitors, resistors, and active devices are mounted on or embedded in the substrate. Who uses PCBs? Well, they’re manufactured for the aerospace, military/defense, medical, and commercial industries. Who manufactures them? There are plenty of companies that provide printed circuit boards, and Advanced Circuits is one of them. In fact, it’s the third largest PCB manufacturer in the world, and its customers have only words of praise.
Let’s start with a short history of Advanced Circuits. The company was founded as Seiko Circuits in 1979, but ten years later it was going out of business. Ron Huston, who had a degree in electrical engineering, was called by Paul Bustabade, who convinced him to buy the dying business. Together, they built an empire almost from scratch, or rather from a 5,000-square-foot garage that didn’t even have a computer or fax machine.
The first years were extremely hard, with the company barely keeping afloat; customers were promised a 1 to 2 percent discount if they made their payments within ten days. Huston and Bustabade didn’t settle for less, and in 1992 they tried their luck, sending 5,000 brochures to potential customers with the promise to deliver products for free and on time. They also began accepting credit cards, and in a short time they started receiving orders that required new hires. Advanced Circuits took on its first sales associate, and after breaking the manufacturing of PCBs down into 20 processes, operations became more complex and the company started to make good money, especially after internet ordering began in 1998.
In March 2008, Advanced Circuits, headquartered at 21101 E 32nd Pkwy, Aurora, CO 80011, announced that it had been named, for the ninth consecutive year, one of the fastest-growing technology companies in Deloitte’s Colorado Technology Fast 50 program. Its first acquisition was Circuit Board Express, which was founded in 1993 in Haverhill, Massachusetts, and was a pioneer in the quickturn PCB business. The company went on to acquire Circuit Express (CEI), based in Tempe, AZ, and later received MIL-PRF-31032 certification (the US government’s technology-specific standard for high-reliability printed circuit manufacturing).
Another important purchase was Universal Circuits, based in Maple Grove, MN, and in January 2013 Advanced Circuits announced a PCB manufacturing expansion, offering more jobs at its Aurora corporate headquarters. In September 2015, Advanced Circuits’ Tempe Division completed its IPC PCQR2 submission, and two months later Coastal Circuits became the property of Advanced Circuits.
First-time customers who type “500” in the registration code box when registering will get $500 off PCBs: 50% off their first order (up to $250) and 50% off their second order (up to $250). Advanced Circuits also offers additional specials, such as Monthly Specials, Bare Bones, Weekend Wonders, Engineering Student Specials, $33 Each - 2 Layer Full Spec Boards, and $66 Each - 4 Layer Full Spec Boards. Moreover, if customers receive damaged boards, or if boards become damaged within 90 days of the shipment date, Advanced Circuits will replace the bare board at no charge.
What are your thoughts about Printed Circuit Boards? Please do let us know in the comments below and we will come back with replies, or maybe include your own quoted replies in this article.

7 bad programming ideas that work

Anyone who has listened to a teenager, sports commentator, or corporate management knows the connection between words and meaning can be fluid. A new dance craze can be both “cool” and “hot” at the same time. A star player’s “sick moves” don’t necessarily require any medical attention. And if a company is going to “reorganize,” it’s not good for anyone, except perhaps the shareholders -- even then it’s not always clear.


The computer world has always offered respite from this madness. No one stores “one” in a bit when they mean “zero.” No one types if x = 0 when they really want to say if x != 0. Logic is a bedrock that offers stability in a world filled with chaos and doublespeak.

Alas, when you get above the ones and zeros, even computer science isn’t so clear. Some ideas, schemes, or architectures may truly stink, but they may also be the best choice for your project. They may be cheaper or faster, or maybe it’s too hard to do things the right way. In other words, sometimes bad is simply good enough.

There are also occasions when a bad idea comes with a silver lining. It may not be the best approach, but it has such good side-effects that it’s the way to go. If we’re stuck going down a suboptimal path to programming hell, we might as well make the most of whatever gems may be buried there.

Here are seven truly bad programming practices that all too often make perfect sense.

Quick and dirty code

Few things can kill a project like poorly designed and poorly documented code. Even if your slapped-together stack stands up, maintaining it will be a nightmare. And anyone who inherits your steaming pile of bits will be a sworn enemy for life.

Then again, quick and dirty code has its place. In fact, the pursuit of clean, thoroughly engineered code can be costly. I once worked with a programming manager who looked everyone in the eyes and cited a long list of ways that our software needed to be cleaned up. Our old code may have been running, but the company would dole out another two months of budget to make sure the lines of code were presentable. Sometimes our programming manager would ask for six months’ worth of budget and she would get it. After all, who wanted unclean code?

Not everyone has access to that kind of budget, and few want to tell budgeting managers to cough up another N months of developer time because clean code is best.

Worse, cleanliness is often a slippery concept. One programmer’s clean code is another’s overly complex code. Often cleaning up code means adding new abstractions and layers that make it clear what the code is doing. This can quickly become TMI. Sometimes quick and dirty code is better than complex “clean” code that is paired with documentation on par with "War and Peace."

Anyone who has waded through pages and pages of well-engineered code with layers and layers of abstractions knows that sometimes a bit of code with one simple input and one simple job description is better than a masterful pile of engineering and computer science. Junky but functional code can take 10 seconds to understand -- sophisticated architectures can take weeks.
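
To make that concrete, here is a hypothetical sketch (the names and the example task are invented): the same ten-second job written the quick-and-dirty way, then buried under the kind of abstraction that reads well in a design review but takes far longer to follow.

```javascript
// Quick and dirty: one function, one simple job, understood in ten seconds.
function totalPrice(items) {
  let sum = 0;
  for (const item of items) sum += item.price * item.qty;
  return sum;
}

// The "clean" version: the same job wrapped in layers of abstraction.
class PriceVisitor {
  visit(item) { return item.price * item.qty; }
}

class AggregationStrategy {
  constructor(visitor) { this.visitor = visitor; }
  aggregate(items) {
    return items.reduce((acc, item) => acc + this.visitor.visit(item), 0);
  }
}

const cleanTotal = (items) =>
  new AggregationStrategy(new PriceVisitor()).aggregate(items);
```

Both compute the same number; only one of them will make sense at a glance six months from now.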

It’s not that doing a good job is a bad thing -- however, many times no one has the time or energy to unwrap all of the sophistication. When time is in short supply, sometimes quick and sloppy wins -- and wins big.

Wasteful algorithms

Sometimes being smart isn’t worth the price. Nowhere is this more evident than when it comes to thoroughly studied algorithms with strong theoretical foundations.

Everyone knows the lessons from college. A smart data structure will do the job in time proportional to the size of the data. A bad one might get slower in time proportional to the square or even the cube of the number of data elements. Some of the truly horrible get exponentially slower as the amount of data grows. The lessons from computer science class are important. Bad algorithms can be really slow.

The problem is that smart, theoretically efficient algorithms can be slow too. They often require elaborate data structures full of pointers and caches of intermediate values, caches that chew up RAM. They can take months or years to get right. Sure, in the long run they’ll be faster, but what is it that economist John Maynard Keynes said? “In the long run we’re all dead.”

Part of the problem is that most of the theoretical models analyze how algorithms behave when the data set grows very large. But even in the era of big data, we may not be dealing with a data set that’s large enough to enjoy all of the theoretical savings.

In these cases, it might be a good idea to toss together a sloppy algorithm, even if it’s potentially slow.
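
As a minimal sketch of that tradeoff (in JavaScript, with invented names), here is a quadratic duplicate check alongside the textbook linear-time version. On a handful of elements the naive loop is perfectly adequate, needs no auxiliary structure, and is obviously correct:

```javascript
// O(n^2): compare every pair. Theoretically wasteful, trivially correct.
function hasDuplicateNaive(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) return true;
    }
  }
  return false;
}

// O(n): the "smart" version, at the cost of a Set that chews up extra RAM.
function hasDuplicateSmart(arr) {
  const seen = new Set();
  for (const x of arr) {
    if (seen.has(x)) return true;
    seen.add(x);
  }
  return false;
}
```

The second version only pays off once the arrays grow large; for a dozen elements the difference is unmeasurable.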

Using a separate database server

When it comes to software performance, speed matters. A few milliseconds on the web can be the difference between early retirement and a total flop. The common wisdom goes: To speed up communications between the layers of your software, put your database on the same machine as the software for packaging the results for the user. With your database code and presentation layer communicating quickly, you eliminate the latency of having to ping a separate machine.

Except it doesn’t always pay off, especially when the single machine can’t efficiently serve the needs of both the presentation and the database layer.

Machines that do a great job running databases are often much different from those running presentation software. To further complicate matters, the differences depend on the type and structure of database you are using. More RAM always helps, but it’s essential when indexes are involved. Big tables need much more RAM than a large number of little ones. If you plan to do many JOINS, you might be better off bucking the all-in-one trend and going with a separate database server.

When you put the database and the rest of the software together under one hood, that one machine is forced to be a jack-of-all-trades. It may be able to communicate with itself quickly, but it can’t be tuned to efficiently perform each of your code’s various tasks.

Using a big CMS hammer on a tiny nail

One of today’s trends is to strip down the work of a central hub and split it up to run as lightweight microservices. Instead of building one portal to all your data, you build dozens or perhaps hundreds of separate web services dedicated to answering specific queries and collecting specific data. Doing so allows you to create, debug, and deploy each service independently -- great for making iterative changes without having to upgrade a monolithic code base.

But a big, fat content management system like WordPress or Drupal can do the same thing, serving up JSON or XML data with a bit of reconfiguration. This may seem like a terrible idea at first glance, as the extra complexity of the CMS can only slow down the stack. But a CMS approach can also speed development and improve debugging. All of the data formatting and “content management” machinery can serve the internal staff who manage the system. Even if no users ever touch those fancy layers, they can still be a big help for the internal audience.

The extra overhead may be a pain, but it’s relatively easy to solve by adding more computing power to the back end.

Integrating display with data

One of the cardinal rules of modern design is to split your project into at least three parts: data storage, decision making, and presentation. Such separations make it simpler to redesign any one part independently of the other two.

There are downsides, though, because separating the display from the data means that the application is constantly reprocessing the data to fit the current template for the display. Much of this is repeated if the template remains the same.

Lately, architects have been reworking data formats to make it easier for display code to process. The move to JSON data structures and JSON NoSQL databases is largely driven by the desire to deliver data in a format that is simpler for the browser to process. It’s not exactly mixing data with display code, but it’s moving them closer together.

Using a cache is often how the applications mix display code with the data. When data is mixed into the template, the result is stored back in the database to be served again and again.
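
A rough sketch of that caching pattern (the names are invented, and a plain Map stands in for the database or cache store a real application would use): the data is mixed into the template once, and the rendered result is served on every later request.

```javascript
// Hypothetical display step: mix raw data into an HTML template.
function render(user) {
  return `<li>${user.name} (${user.visits} visits)</li>`;
}

// Stand-in for storing rendered markup back in the database or a cache store.
const renderedCache = new Map();

function renderCached(user) {
  if (!renderedCache.has(user.id)) {
    renderedCache.set(user.id, render(user)); // reprocess only on a cache miss
  }
  return renderedCache.get(user.id); // served again and again afterward
}
```

The obvious catch is invalidation: when the data or the template changes, the cached markup has to be thrown away.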

Using a suboptimal foundation

It used to be that choosing the “wrong” architecture or strategy for your long-term growth goals meant imminent project death. These days, however, recovering from poor early choices can be relatively easy, as long as throwing more cloud machines at the problem remains a workable solution.

If your server stack is slow or your databases are getting bogged down, you can often simply turn up the dial and rent more machines. Then when the crowds dissipate, you can dial back the extra computing power. When extra machines cost mere pennies per hour, it’s no longer as catastrophic to make an architectural mistake.

Of course, not all errors can be fixed by throwing pennies at them. Some poor decisions lead to exponential blowups when the company grows. Those kinds of failures can quickly empty any wallet when the cloud meter is running. But simply choosing a stodgy database or an elaborate filter that’s merely twice as slow isn’t a deal breaker as long as it doesn’t compound.

The key is to avoid bottlenecks in the central, most crucial part of the design. Keeping the moving parts separate and independent helps ensure that they don’t interfere with each other and produce a deadly lockup. As long as the core architecture doesn’t produce gridlock, bad decisions can be covered up with faster hardware. It’s not pretty, but it’s often effective.

Consider Facebook, a company that began using PHP, one of the early tools for web applications that already felt a bit dated by the time Facebook launched. The unappealing issues, though, were ones that bothered programmers -- not users. For all the odd syntax and limited power, the approach was solid. Facebook has since spurred PHP development by creating the HHVM, a much faster version that inspired a rewrite of the PHP core. Now Facebook runs the old code much faster, and users don’t know the company settled on an early platform choice that still makes some programmers' eyes roll.

Choosing a passable solution is often cheaper than engineering a sophisticated new approach. Sitting everyone down to redesign software so that it runs smoothly and efficiently could cost a fortune. A smart programmer makes $200,000 a year -- at a few cents per instance-hour, that salary buys millions of server hours at Amazon. Being smart often isn’t worth the trouble when more hardware is cheap and rentable by the hour.

Keeping dusty code in production

A team of managers once called me in to look at a fancy, modern web application developed with the latest ideas and the newest language (Java, at the time). The problem was that the old mainframe talking to monochromatic dumb terminals was so much faster that everyone who had to use the new code was complaining. Can’t we go back to the ’60s-era tech?

One of the new Java programmers even told me, in frustration, something like, “It’s not fair to compare us to the old green-screen app. We’re doing so much more.” By "more," he meant using fancy fonts, tasteful colors, and forms that fit into resizable windows. The same data was still moving from fingers to database, but the people answering the phones remembered how much faster it was to work with the garish green screens with their fixed-width fonts.

The latest software technology is not always an improvement. There’s a reason why hardware engineers chuckle that programmers exist to create the bazillion lines of new code to make sure the new hardware runs as slowly as the old. Otherwise there wouldn’t be a need for new hardware.

Some earnest programmers like to talk in serious tones about issues like “technical debt” and “continual refactoring.” They speak knowledgeably about the importance of investing in refreshing the code. But at times all those dreams of wiping the slate clean and rewriting everything turn into a nightmare.

It’s a tough call. If the old code is buggy or failing, rewriting is the only choice. But sometimes rebuilding an app simply to keep it current can be a big mistake. Sometimes you go backward and end up with a trendy architecture written in the latest language but filled with new, trendy bugs to go with it.

This story, "7 bad programming ideas that work," was originally published by InfoWorld.

7 programming languages we love to hate -- but can’t live without

The well-meaning advice not to carry a grudge certainly didn’t come from anyone who’s wrestled with a computer for a living. Toil for any length of time with the infernal logic of a programming language and you’ll know the horrors of the inky void where the worst bugs dwell.

Sure, everyone loves a computer language when they first encounter it. And why wouldn’t we, with all those “hello world” examples that show how powerful the language can be in three lines of code. Programming languages are defined to be implicitly logical, but that doesn’t mean they spread logic everywhere they go. A pleasant barkeep may make the lives of everyone at the bar happier. A brave firefighter radiates bravery. But the logical mechanisms of programming languages often breed illogic, confusion, and doubt.

It’s not, well, logical to say that languages are -- Spock pause -- illogical, but we say it anyway because we know that logic has its limits. From Gödel and Turing, we’ve learned that logical mechanisms have edges where scary things occur. Sure, maybe it’s our own fault, we humans, for misusing or misprogramming. But if the programming languages force our brains into weird yoga poses, it’s hard not to blame them for our ills.

And we often can’t do anything about it. The installed base may be too large for us to jettison the language that irks us. The boss may love a stack so much he can’t hear the screams coming from the cubicle farms. The cruel truth is that there may be no better options. We’re already using the best tools that humans can build.

Following are seven programming languages we love to hate but can’t live without.

Language we love to hate: C


There are so many issues with a language that might better be called “portable assembler” than a full computer language. Does anyone like writing separate header files? Has anyone used the preprocessor for something elaborate without going slightly mad?

In theory, we’re supposed to be able to use the power of the pointer arithmetic to do superclever feats, but does anyone risk doing more than allocating data structures? Is it even a good idea to be too clever with pointers? That’s how code starts to break. If you’re able to be clever, it often requires writing a very long comment to document it, pretty much sucking up all the time you saved being clever. Can anyone remember all the rules for writing C code to avoid adding all the possible security holes, like buffer overruns?

But we have no choice. Unix is written in C, and it runs most cellphones and most of the cloud. Not everyone who writes code for these platforms needs to use C, but someone has to stay current with the asterisks and curly brackets, or else everything will fall apart. Then there are the device drivers and other embedded programs. Someone has to shoulder the load of keeping the Linux/Unix code base moving forward.

Language we love to hate: JavaScript


JavaScript’s creators tried to make something modern. It’s too bad that in their cleverness they’ve forever doomed us to a life of counting curly brackets, square brackets, and parentheses -- while ensuring that they’re properly nested. Between the anonymous functions, the closures, and the JSON data structures, our pinkies get a real workout hitting those keys.

Then there are the weird details. If x is a string that holds the character for 1, then x+1 will produce the string 11 and x-1 will produce the number zero. Does anyone remember the difference between false, null, NaN, and undefined? They sound similar, but why does JavaScript have all four of them? And why don’t they behave consistently?
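
Every one of those quirks is real and easy to confirm in a JavaScript console:

```javascript
const x = "1";        // a string holding the character for 1
console.log(x + 1);   // "11" -- + concatenates when either operand is a string
console.log(x - 1);   // 0    -- - coerces both operands to numbers

// Four flavors of "nothing," and they refuse to behave consistently.
console.log(null == undefined);  // true  -- loosely equal to each other...
console.log(null === undefined); // false -- ...but not strictly equal
console.log(false == null);      // false -- null isn't loosely false, either
console.log(NaN === NaN);        // false -- NaN isn't even equal to itself
```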

It doesn’t matter how much we complain. The Internet, the World Wide Web, and a bazillion browsers aren’t going anywhere. Then the clever Node.js team came along and forced us to write JavaScript on the server. Holding out on principle will last a few seconds until we need to check our email or buy something. We’ll run JavaScript for a long time.

Language we love to hate: PHP


It’s not really a computer language. It’s more of a tool for adding a bit of smarts to static HTML. You can store information in a database and concatenate it with static tags. There might be a few more features, but it seems like all we do with PHP is glue together strings we grab from a database.

Arguing about toyish code or baby syntax isn’t worth the trouble. Most of the Web is built with PHP. Between WordPress, Joomla, and Drupal, most of the content on the Web is delivered through PHP code. Then there’s a little thing known as Facebook that was written in PHP and continues to suck up a larger and larger percentage of the time of people “on the Web.” We should be happy that Facebook built the HipHop Virtual Machine, inspiring Zend to create PHP 7.0. These new PHP engines are often twice as fast, an irresistible speed bump that will save millions in electricity and ensure we’ll write PHP long into the future.

Language we love to hate: Cobol


Cobol began in 1959, long before most of us were born. It should be obsolete with its complex syntax filled with hundreds of restricted words. Yet the Cobol lovers keep generating new versions, borrowing ideas from other languages, and bolting them onto a frame that’s almost 60 years old. Did you know there’s something called Cobol 2014? It includes dynamic tables, an idea that people have been trying to get into the language since 2002. That’s not all that’s new. Did you think it died in the ’70s? You are so wrong.

We may have better tools for writing business logic to manipulate databases, but no one seems to bother because it’s easier to buy a bigger computer and keep the Cobol code running. As I type this, there are 543 jobs listed on Dice.com with the word “Cobol” in them. There are Cobol jobs in insurance companies and defense contractors everywhere. The early adopters of mainframes still use Cobol -- and get the job done. Computer scientists may recoil in horror, but as long as customers are lining up, the bosses will say, “If it ain’t broke, don’t fix it. Just buy another mainframe.”

Language we love to hate: XSLT


Everyone starts off loving XSLT, a functional language for transforming XML. It’s a clever solution that works very well when you need to extract bits and pieces of large XML documents. But once the boss asks for something more complex than a simple search and replace, the development bogs down. The language is explicitly functional, and soon we discover that when the documentation says “variable,” it is using the word like an algebra teacher not a programmer. Ponder this Zen-like sentence from XSLT expert Bob DuCharme: “XSLT variables actually have a lot more in common with constants in many programming languages and are used for a similar purpose.” If you want to use a variable that behaves like a variable in other computer languages -- that is, it can change -- you better be very clever.

XML may be losing ground to more efficient data formats like JSON, but it’s still a powerful foundation for many big data processors. You don’t need to use XSLT. You can always write basic code that parses the text itself. However, writing all that code to parse the XML can be more work than grokking the XSLT structure.

Language we love to hate: Java


The virtual machine and the libraries may date from the ’90s, but the syntax is stuck in the 1970s, when C was created. The automatic memory management seems like a big step forward, until your code decides to take a knee while garbage collection takes control. Android developers exchange tips on when to politely request a garbage collection in advance, to ensure that the collector doesn’t start up in the middle of an important event, like a phone call to 911.

Java programmers have complained for a long time about many issues, some of which have been fixed or at least addressed by Oracle. But this creates a new problem. Some of the newer code and libraries can’t work with the old VMs. I spent a day trying to wrangle java.lang.UnsupportedClassVersionError but could not find a permanent solution. It’s almost as if each version of Java after 1.4 is a different language.

None of these issues matter. Java is a foundation for the web and mobile phones. It’s the first language taught in many high schools. The collection of libraries is deeper and more valuable than that of almost any other language. Why would anyone use anything else?


Language we love to hate: Python


It’s a modern language that the younger kids dig. The punctuation is sparse, and the code looks a bit cleaner. What’s not to love? Well, there’s the gap between Python 2.7 and 3.0. It was the only choice they had for moving the language forward, but the leap is large enough that you need to keep track of which syntax you’re using. We will forever be checking to see which version of Python is installed.

And how many people like counting all of the spaces used to indent blocks of code? Counting curly brackets is painful, but counting whitespace requires a monospace editor.

None of this matters because the soft science crowd has fallen for Python with all the warm, fuzzy emotions that kept them out of the hard sciences. Biologists and economists think Python is the only thing. Some even propose requiring Python code in new prospectuses for stocks and bonds so that investment bankers will be able to bamboozle us with Python instead of fractured lawyer-speak.

The good news is that it’s easier to read Python than the so-called English coming from the fingers of lawyers. That’s an improvement -- even if it means counting all of those spaces. The bandwagon has left the station, and it’s full of soft scientists.

This story, "7 programming languages we love to hate -- but can’t live without" was originally published by InfoWorld.

New Google tool cuts JavaScript code down to size

Working to improve mobile memory consumption in its V8 JavaScript engine, Google has developed Ignition, a JavaScript interpreter to cut overhead and boost execution of scripts. Google sees the technology offering other opportunities to increase web performance as well.

Through Ignition, V8 compiles JavaScript functions to a concise bytecode that's 25 to 50 percent the size of equivalent baseline machine code, Ross McIlroy, Google engineer for Android software, said. "This bytecode is then executed by a high-performance interpreter, which yields execution speeds on real-world websites close to those of code generated by V8's existing baseline compiler."

Adding Ignition to the script execution pipeline opens up possibilities beyond reducing V8 memory overhead, according to McIlroy. "The Ignition pipeline has been designed to enable us to make smarter decisions about when to execute and optimize code to speed up loading web pages and reduce jank and to make the interchange between V8's various components more efficient," he said.

V8 and other engines leverage JIT (just-in-time) compilation of script to native machine code for performance purposes. With V8, the script execution pipeline has conditions requiring complex machinery to switch between the baseline compiler and two other optimizing compilers: Crankshaft and TurboFan. With this process, JITed machine code can consume lots of memory, even if the code is executed only once. Ignition, which can replace V8's baseline compiler, executes code with less memory overhead and paves the way for a simpler script execution pipeline, McIlroy explained.

The interpreter uses low-level, architecture-independent macro-assembly instructions from TurboFan to generate bytecode handlers for op codes. TurboFan compiles instructions to the target architecture, providing low-level instruction selection and machine register allocation. "This results in highly optimized interpreter code, which can execute the bytecode instructions and interact with the rest of the V8 virtual machine in a low-overhead manner, with a minimal amount of new machinery added to the codebase," said McIlroy.

Ignition is enabled in the Chrome 53 browser on Android devices with 512MB of memory or less. "Results from early experiments in the field show that Ignition reduces the memory of each Chrome tab by around 5 percent," McIlroy noted.

This story, "New Google tool cuts JavaScript code down to size" was originally published by InfoWorld.

7 deadly career mistakes developers make

You'll find no shortage of motivational career phrases surrounding failure: Fail fast, failure builds character, the key to success is failure, mistakes make you grow, never be afraid to fail. But the idea of mistaking your way to the top of the software industry is probably unsound. Every developer will have their share of missteps in a career, but why not learn from others’ experience -- and avoid the costliest errors?

That’s what we did: We talked with a number of tech pros who helped us identify areas where mistakes are easily avoided. Not surprisingly, the key to a solid dev career involves balance: not staying with one stack or job too long, for example, but also not switching languages and employers so often that you raise red flags.

Here are some of the most notable career traps for engineers -- a minefield you can easily avoid while you navigate a tech market that’s constantly changing.

Mistake No. 1: Staying too long


These days it’s rare to have a decades-long run as a developer at one firm. In many ways, it’s a badge of honor, showing your importance to the business or at least your ability to survive and thrive. But those who have built a career at only one company may suddenly find themselves on the wrong end of downsizing or “rightsizing,” depending on the buzzword favored at the time.

Opinions vary on how long you should stay in one place. Praveen Puri, a management consultant who spent 25 years as a developer and project manager before starting his own firm, isn't afraid to throw out some numbers.

“The longer you stay in one position, the more your skills and pay stagnate, and you will get bored and restless,” Puri says. “On the other hand, if you switch multiple jobs after less than two years, it sends a red flag. In my own experience, I stayed too long on one job where I worked for 14 years -- I should have left after six. I left other positions after an average of four years, which is probably about right.”

Michael Henderson, CTO of Talent Inc., sees two major drawbacks of staying in one place too long. “First, you run the risk of limiting your exposure to new approaches and techniques,” he says, “and secondly, your professional network won’t be as deep or as varied as someone who changes teams or companies.”

Focusing too much on one stack used by your current employer obviously is great for the firm but maybe not for you.

“It’s a benefit to other employers looking for a very specialized skill set, and every business is different,” says Mehul Amin, director of engineering at Advanced Systems Concepts. “But this can limit your growth and knowledge in other areas. Obviously staying a few months at each job isn’t a great look for your résumé, but employee turnover is pretty high these days and employers expect younger workers like recent college graduates to move around a bit before staying long-term at a company.”

Mistake No. 2: Job jumping


Let’s look at the flip side: Are you moving around too much? If that’s a concern, you might ask whether you’re really getting what you need from your time at a firm.

Charles Edge, director of professional services at Apple device management company JAMF Software, says hiring managers may balk if they’re looking to place someone for a long time: “Conversely, if an organization burns through developers annually, bringing on an employee who has been at one company for 10 years might represent a challenging cultural fit. I spend a lot of time developing my staff, so I want them with me for a long time. Switching jobs can provide exposure to a lot of different techniques and technologies, though.”

Those who move on too quickly may not get to see the entire lifecycle of the project, warns Ben Donohue, VP of engineering at MediaMath.

“The danger is becoming a mercenary, a hired gun, and you miss out on the opportunity to get a sense of ownership over a product and build lasting relationships with people,” Donohue says. “No matter how talented and knowledgeable you are as a technologist, you still need the ability to see things from the perspective of a user, and it takes time in a position to get to know user needs that your software addresses and how they are using your product.”

Hilary Craft, IT branch manager at Addison Group, makes herself plain: “Constant job hopping can be seen as a red flag. Employers hire based on technical skill, dependability, and more often than not, culture fit. Stability and project completion often complement these hiring needs. For contractors, it’s a good rule to complete each project before moving to the next role. Some professionals tend to ‘rate shop’ to earn the highest hourly rate possible, but in turn burn bridges, which won’t pay off in the long run.”

Mistake No. 3: Passing on a promotion


There’s a point in every developer’s life where you wonder: Is this it? If you enjoy coding more than running the show, you might wonder if staying put could stall your career.

“Moving into management should be a cautious, thoughtful decision,” says Talent Inc.’s Henderson. “Management is a career change -- not the logical progression of the technical track -- and requires a different set of skills. Also, I’ve seen many companies push good technical talent into management because the company thinks it’s a reward for the employee, but it turns out to be a mistake for both the manager and the company.”
Get to know your own work environment, says management consultant Puri, adding that there’s no one-size-fits-all answer to this one.

“I’ve worked at some places where unhappy managers had no real power, were overloaded with paperwork and meetings, and had to play politics,” Puri says. “In those environments, it would be better to stay in development. Long term, I would recommend that everyone gets into management, because development careers stall out after 20 years, and you will not receive much more compensation.”

Another way of looking at this might be self-preservation. Scott Willson, product marketing director at Automic, asks the question: “Who will they put in your place? If not you, they may promote the most incompetent or obnoxious employee simply because losing their productivity from the trenches will not be as consequential as losing more qualified employees. Sometimes accepting a promotion can put you -- and your colleagues/friends -- in control of your workday happiness. Everyone should be in management at least once in their career if for nothing else than to gain insight into why and how management and companies operate.”

Mistake No. 4: Not paying it forward


A less obvious mistake might be staying too focused on your own career track without consideration of the junior developers in your office. Those who pair with young programmers are frequently tapped when a team needs leadership.

“I’ve found that mentoring junior developers has made me better at my job because you learn any subject deeper by teaching it than you do by any other method,” says Automic’s Willson. “Also, as developers often struggle with interpersonal skills, mentoring provides great opportunities to brush up on those people skills.”

If experience is the best teacher, teaching others will only deepen your knowledge, says JAMF Software’s Edge. That said, he doesn’t hold it against a busy developer if it hasn’t yet happened.
“Let’s face it -- no development team ever had enough resources to deliver what product management wants them to,” Edge says. “When senior developers don’t have the time to mentor younger developers, I fully understand. Just don’t say it’s because ‘I’m not good with people.’”

Mistake No. 5: Sticking to your stack


Your expertise in one stack may make you invaluable to your current workplace -- but is it helping your career? Can it hurt to be too focused on only one stack?

MediaMath’s Donohue doesn’t pull any punches on this one: “Of course it is -- there’s no modern software engineering role in which you will use only one technology for the length of your career. If you take a Java developer that has been working in Java for 10 years, and all of a sudden they start working on a JavaScript application, they’ll write it differently than someone with similar years of experience as a Python developer. Each technology that you learn influences your decisions. Some would argue that isn’t a good thing -- if you take a Java object-oriented approach to a loosely typed language like JavaScript, you’ll try to make it do things that it isn’t supposed to do.”

It can hurt your trajectory to be too focused on one stack, says Talent Inc.’s Henderson, but maybe for different reasons than you think.

“Every stack will have a different culture and perspective, which ultimately will broaden and expedite your career growth,” Henderson says. “For instance, I find that many C# developers are only aware of the Microsoft ecosystem, when there is a far larger world out there. Java has, arguably, the best ecosystem, and I often find that Java developers make the best C# developers because they have a wider perspective.”

Automic’s Willson says proficiency -- but not mastery -- with one stack should be the benchmark before moving on to another.

“It’s time to move on when you are good at the skill, but not necessarily great,” says Willson. “I’m not advocating mediocrity, just the opposite. I am saying that before you head off to learn a new skill make sure you are good, competent, or above average at that skill before you consider moving on.”

Finally, Talent Inc.’s Henderson offers this warning: “Avoid the expectation trap that each new language is simply the old one with a different syntax. Developers of C# and Java who try to force JavaScript into a classical object-oriented approach have caused much pain.”
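Henderson’s warning can be illustrated with a hypothetical sketch (the names here are invented for the example, not taken from any quoted codebase): idiomatic JavaScript and TypeScript lean on structural “duck” typing, where any value with the right shape satisfies a contract, rather than the nominal class hierarchies Java and C# developers are used to.

```typescript
// A structural contract: anything with a matching `log` method qualifies.
interface Logger {
  log(msg: string): string;
}

// A plain object literal satisfies the interface -- no `extends
// AbstractLogger`, no factory, no class boilerplate ported from Java/C#.
const consoleLogger: Logger = {
  log: (msg) => {
    console.log(msg);
    return msg;
  },
};

function run(task: string, logger: Logger): string {
  logger.log(`running ${task}`);
  return `${task} done`;
}

console.log(run("build", consoleLogger)); // prints "running build", then "build done"
```

A developer carrying classical habits over might instead build an abstract base class and force every logger to inherit from it -- exactly the kind of friction with the host language that Henderson describes.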

Mistake No. 6: Neglecting soft skills


Programmers are typically less outgoing than, say, salespeople. No secret there. But soft skills can be picked up over time, and some of the nuances of developing a successful career -- like learning from mentors and developing relationships -- can be missing from your career until it’s too late.
“It makes for better software when people talk,” says MediaMath’s Donohue. “Soft skills and conversations with customers can also give a great sense of compassion that will improve how you build. You begin to think about what the customers really need instead of overengineering.”

Talent Inc.’s Henderson says your work with other people is a crucial part of developing a successful dev career.

“All human activities are social, and development is no exception,” Henderson says. “I once witnessed an exchange on the Angular mailing list where a novice developer posted some code with questions. Within an hour -- and through the help of five people -- he had rock-solid idiomatic Angular code, a richer understanding of Angular nuance and pitfalls, and several new contacts. Although the trolls can sometimes cause us to lose faith, the world is full of amazing people who want to help one another.”

Automic’s Willson says a lack of soft skills is a career killer. When less proficient programmers move ahead, developers who don’t have people skills -- or simply aren’t exercising them -- are left wondering why. Yet everyone loves bosses, he says, “who demonstrate tact and proficient communication.”

“To improve your soft skills, the Internet, e-courses, friends, and mentors are invaluable resources if ... you are humble and remain coachable,” Willson says. “Besides, we will all reach a point in our career when we will need to lean on relationships for help. If no one is willing to stand in your corner, then you, not they, have a problem, and you need to address it. In my career, I have valued coachable people over uncoachable when I have had to make tough personnel decisions.”

Programming is only one aspect of development, says management consultant Puri. “The big part is being able to communicate and understand business objectives and ideas, between groups of people with varying levels of technical skills. I've seen too many IT people who try to communicate too much technical detail when talking with management.”

Mistake No. 7: Failing to develop a career road map


Developing goals and returning to them over time -- or conversely developing an agilelike, go-with-the-flow approach -- both have their proponents.
“I engineer less for goals and more for systems that allow me to improve rapidly and seize opportunities as they arise,” says Henderson. “That said, I recommend making a list of experiences and skills that you’d like to acquire and use it as a map, updating it at least annually. Knowing where you’ve been is as useful as knowing where you want to go.”

And of course maybe equally important -- where you don’t want to go.

“Early in my career, I hadn’t learned to say no yet,” says Edge, of JAMF Software. “So I agreed to a project plan that there was no way could be successfully delivered. And I knew it couldn’t. If I had been more assertive, I could have influenced the plan that a bunch of nontechnical people made and saved my then-employer time and money, my co-workers a substantial amount of pain, and ultimately the relationship we had with the customer.”

Automic’s Willson gives a pep talk straight out of the playbook of University of Alabama’s head football coach Nick Saban, who preaches having faith in your process: “The focus is in following a process of success and using that process as a benchmark to hold yourself accountable. To develop your process, you need to find mentors who have obtained what you wish to obtain. Learn what they did and why they did it, then personalize, tweak, and follow.”



This story, "7 deadly career mistakes developers make" was originally published by InfoWorld.

Tuesday, August 30, 2016

Samsung Galaxy S7 Edge vs. Samsung Galaxy Note 7 – Comparison at its Best

Today we have for you a comparison between Samsung’s two new flagships on the market: Samsung Galaxy Note 7 and Samsung Galaxy S7 Edge. The two have been said to be very similar in looks, but let’s take a more in-depth look at what both the devices have to offer.
Samsung Galaxy S7 Edge

The Super AMOLED display on the Galaxy S7 Edge is quite a gorgeous one, with the curved edges we first saw on the Galaxy S6 Edge. The screen measures 5.5 inches diagonally and holds a resolution of 2560 x 1440 pixels, which works out to an impressive pixel density of roughly 534 pixels per inch.
As for the processor, the phone comes in two different configurations, depending on where you’re located. U.S. buyers get the same processor we see on the Samsung Galaxy S7, a quad-core Snapdragon 820 paired with an Adreno 530 GPU.
In other parts of the world, the Galaxy S7 Edge ships with an octa-core Exynos processor, the 8890 model, paired with a Mali-T880 MP4 graphics processing unit. Both versions come with 4GB of RAM, while the flagship’s internal storage is 32GB.
In the camera department, we see a 12MP rear camera with PDAF (phase detection autofocus) as well as auto HDR. The front camera is a 5MP unit, which is pretty decent for such a popular smartphone. The battery has a capacity of 3,600 mAh and is non-removable.
Samsung Galaxy Note 7

The newly released phablet from Samsung, the Galaxy Note 7, has been quite popular among critics, who have found few cons to hold against the device. The Galaxy Note 7 is the first device to rock the newest Gorilla Glass, as well as the first Note phablet to be waterproof. The S-Pen is better than ever, the stylus being water resistant as well. The similarity in looks with the S7 Edge comes from the curved edges Samsung decided to give the new phablet. This does soften the Note 7’s features, but some people think the two flagships look too much alike.
Starting with the display, we see a 5.7-inch Super AMOLED QHD panel. The resolution is 2560 x 1440 pixels, which works out to a pixel density of roughly 515 pixels per inch.
It’s not only the Galaxy S7 Edge that is sold with two processor options; the Note 7 is as well. The first option is the octa-core Exynos processor, while the other is the same quad-core Snapdragon 820 found in its counterpart. The phablet has 4GB of RAM and 64GB of internal memory.
The rear camera is 12MP, the same as the Galaxy S7 Edge’s, and offers the same options as its opponent. The front camera isn’t any different either, the Galaxy Note 7 showing off a 5MP shooter. The battery has a capacity of 3,500 mAh and is non-removable.
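The pixel-density figures for both phones follow directly from resolution and diagonal; as a quick sanity check, here is the arithmetic in a minimal sketch. Note that the 5.5-inch Edge panel works out to about 534 ppi (the often-quoted 577 ppi figure corresponds to the smaller 5.1-inch Galaxy S7), and spec sheets sometimes round these values slightly differently.

```typescript
// Pixel density (ppi) from resolution and screen diagonal:
// ppi = sqrt(width^2 + height^2) / diagonal_in_inches
function ppi(width: number, height: number, diagonalInches: number): number {
  return Math.hypot(width, height) / diagonalInches;
}

console.log(Math.round(ppi(2560, 1440, 5.5))); // Galaxy S7 Edge: ~534
console.log(Math.round(ppi(2560, 1440, 5.7))); // Galaxy Note 7: ~515
```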
Conclusion
The two are very similar in specs, with some differences in display and battery -- nothing too drastic. If you’re going to buy one of these two flagships, it comes down to your purpose: for purely personal use, you might go for the Galaxy S7 Edge, while the Galaxy Note 7 is marketed at more business-oriented buyers.
This story was originally published by techfreakz.

Windows 10 – Why Should You Upgrade?

Windows 7, 8 and 8.1 users have two more days to upgrade to Windows 10 for free, if they have a valid license. After July 29, upgrading to Windows 10 for free will no longer be possible, so you should hurry up and take the step; otherwise you’ll pay $119 for the Home Edition or $199 for the Pro Edition. In this article, we’ll focus on both the positive and negative aspects of using Windows 10.
Cortana is a direct response to Apple’s Siri and Google’s Google Now, as it provides similar functions on the desktop. Cortana was first introduced in Windows Phone 8.1, then made its way to the new Windows 10 software, and it allows users to control desktop functions using their voice. Soon, iOS and Android will get versions of Cortana, but that’s a different story.
Windows 10 users don’t need to create an account in order to log into their devices -- Microsoft made that mistake with Windows 8, and users were put off. Gamers are drawn to Windows 10 because it supports the latest DirectX 12 graphics interface, which boosts gaming speed and reliability while consuming less power.
Those who want to open multiple windows at the same time and work with several applications in parallel can create virtual desktops using the “Task View” feature. Updates are installed automatically, and the Windows Media Center application has been ditched, so users will no longer be able to play DVD content out of the box.
Windows 10 is bad for owners of old computers with obsolete hardware, because they can’t handle the new software and stop working properly. Recall the case of the woman who tried to install Windows 10 and whose device became unresponsive, preventing her from working.
There are some privacy concerns regarding Cortana, because many users don’t trust the digital assistant, as it collects data about their habits. Another complaint is regarding the removal of desktop gadgets, which were popular in Windows 7. “Gadgets could be exploited to harm your computer, access your computer's files, show you objectionable content, or change their behavior at any time,” was the reason invoked by Microsoft.
Impressions:
Peter Bright from ArsTechnica
The biggest problem with Windows 10 is that we have little reason to use it beyond work. At home, I have a smartphone, and the mobile applications and websites I use don’t require Windows -- sometimes they don’t quite fit into the Windows world. Windows 10 is a reminder that the operating system is no longer important in itself. Among the first things it asks is for me to log into a Microsoft account, which feels like a heavy-handed push toward Microsoft services -- mostly unpopular, second-rate ones such as OneDrive and Bing.
Geoffrey Fowler from The Wall Street Journal
It was time for Microsoft to start from scratch, and that’s exactly what Windows 10 is (...). It’s not just a refresh of the Windows operating system. It’s a big, ambitious, universal OS running applications across multiple different devices. (...) With Windows 10, Microsoft finally fixes all the problems of the Windows 8 desktop. But at the same time, it ignores the challenges ahead posed by Android and iOS devices.
Let us know your opinions about Windows 10 if you’ve installed it on your PC. In the next article we will present the most common bugs and their fixes.

Virtual Reality Reached A New Milestone With Harvard University Students Help

Two Harvard University students are revolutionising the way that we use Virtual Reality in entertainment. Connor Doyle and Jamie Herring, the founders of the student group Convrgency and Harvard VR Lab, are mixing traditional film and 360-degree film to create the world’s first mixed reality miniseries.
Virtual Reality has sprung to the forefront of technological innovation in the past year. Since the industry is predicted to be worth $150 billion in only a few years, it is easy to imagine the potential impacts that the technology can have on all varieties of industry. One of these industries is entertainment, film, and media.
At present, companies such as Vrideo, and people such as Chris Milk are leading that industry, creating content that focuses on transporting people to beautiful and otherwise inaccessible locations and times through the VR medium. However, Doyle and Herring do not see the long-term potential in this. “Film and TV’s continuing success,” says Herring, “has been dependent upon people wanting to see things again and again. The same is true of theatre. People want stories to immerse themselves in. VR is the definition of immersion. It shouldn’t be used as a teleport, but as a window. It’s a new way to tell a story by experiencing it.”
Vrinc is Convrgency’s new story. Set in a dystopian reality, not too different from our own, it presents a future where VR has become a means to escape into your own paradise. However, when the characters’ reality starts to become virtual, the lines between what’s real and what isn’t are blurred. It builds upon the success of Mr Robot, in its realist pretence, but in the rapidly evolving form of VR. Doyle, the director of the project, states, “there seems to be this underlying feeling within society right now that we are losing our grip on our own reality. We’re constantly on our phones, texting more than we are talking, catching Pokemon on the streets… People are unnerved by it. A story where we spend more of our time in the virtual than in the real isn’t a stretch of the imagination at this moment in time.”
The movement between traditional film and 360-degree video in Episode 3 of the miniseries is unprecedented in the VR industry right now. There is a real sense while watching that you suddenly become part of this world that you have been discomforted by in the first part of the miniseries. Doyle and Herring have managed to remove the distance between film and technology by making the technology part of the story. The movement towards VR becomes as necessary as the plot.
Virtual Reality is one of the most exciting evolutions in recent years. With the dropping prices of such products as Samsung Gear and Google Cardboard, VR is becoming increasingly available to consumers who want to be entertained. Competing against the big players of Jaunt, Oculus, Samsung, and Google, Doyle and Herring want to be breaking new ground in film, theatre, and VR.
Focusing on story, Convrgency are creating immersive content that is scalable, insightful, and intriguing. Drawing on their theatre backgrounds, there is a clear sense of audience connection. The beauty of VR is that it’s a theatre for one audience member: you have the front-row seat. But Convrgency go one step further and cast the audience in the lead role.
Vrinc is available to watch at vrinc.io and will conclude at the end of September.

How Driverless Cars May Interact With People

SAN FRANCISCO — There are plenty of unanswered questions about how self-driving cars would function in the real world, like understanding local driving customs and handing controls back to a human in an emergency.

Now a start-up called Drive.ai, based in Mountain View, Calif., is trying to address how an autonomous car would communicate with other drivers and pedestrians. The company is emphasizing what is known in the artificial intelligence field as “human-machine interaction” as a key to navigating confusing road situations.
How does a robot, for example, tell everyone what it plans to do in intersections when human drivers and people in crosswalks go through an informal ballet to decide who will go first and who will yield?
“Most people’s first interaction with self-driving cars will not be as a rider, but more likely as a pedestrian crossing the street,” said Carol Reiley, the co-founder and president of Drive.ai. “I think it is so important for everyone to trust this type of technology.”
The start-up gained some attention earlier this year when it received a license from the State of California to test driverless cars on the road. But Tuesday was the first time its executives outlined, at least in broad terms, what they planned to do. They would not discuss the company’s investors.
The Drive.ai cars won’t speak with pedestrians and bicyclists. But they will try to communicate with visual displays that go beyond today’s turn signals, perhaps with bannerlike text and easily identifiable sounds, company officials said.
The company, populated by graduate students and researchers from the Stanford Artificial Intelligence Laboratory, is entering a crowded field in the race to self-driving vehicles. There are about 20 self-driving car projects in Silicon Valley and more than four dozen around the country.
Unlike many of the efforts, however, Drive.ai will not attempt to build cars. Instead, it plans to retrofit commercial fleets for tasks like parcel delivery and taxi services.
The company is leaning on a technology called deep learning, a machine-learning technique that has gained wide popularity among Silicon Valley firms. It is used for a variety of tasks, like understanding human speech and improving the ability to recognize objects in computer vision systems.
An Israeli firm, Mobileye, is the dominant supplier of vision technology to the automotive industry, but Silicon Valley companies like Nvidia are also starting to compete for that business.
The self-driving cars of the future will need to be transparent about what their intentions are, how they make decisions and what they see, said Ms. Reiley, who is a roboticist with a background in designing underwater robotics and medical systems. They will need to communicate clearly both with the world around them and with their passengers.
“There’s the left brain in which a lot of discussion has taken place, what algorithms and what sensors, the logical side,” she said. “A lot of the discussion around self-driving cars has no human component, which is really weird because this is the first time a robotic system is going out in the world and interacting with people.”