Friday, March 20, 2015

8 Android Apps To Boost Your Productivity

 Everyone has a smartphone these days. Most of us use the smartphone for emails, social media updates and so on. The smartphone has made everything a lot easier, and you are connected with the world all the time. Being connected all the time, however, can be a big drawback: it is a constant distraction and eventually leads to a decrease in productivity.



True professionals are always looking for the newest and most effective ways to make their everyday work life more efficient. There are a number of productivity apps available for Android, iOS and Windows Phone, so it is important to choose the best ones. Today we have listed the top eight apps for Android that will help you increase your workplace productivity and speed up your workflow.

1. 1Password:

1Password is a beautiful, secure app for passwords and other sensitive information that you need to keep safe. You can store thousands of wallet items free of cost. The app promises the security of your vital information: you can store credit card credentials, secure notes, passports, bank account information and even email attachments. The app is available for Android, iOS and Chrome, making it a cross-platform solution for all your sensitive information. The free version provides almost all the features, but you can upgrade to the premium version, which lets you add new login items, secure notes and identities right from your Android device.

2. Pocket:

Pocket is the perfect mobile companion that lets you save articles, videos and other web content to view later. The app has a very user-friendly and engaging interface: you can simply bookmark everything you want to read later. The service was initially introduced as Read It Later, but it later underwent a complete overhaul and was reintroduced as Pocket. The app is available for free on almost all platforms.

3. Buffer:

Buffer helps you gain total control over when and what you post online. You can use Buffer to schedule your tweets, Facebook posts and any other content for a predetermined time. The application also provides detailed metrics on how a particular post is performing, which helps you analyse your social media presence. If you are a media or marketing person who deals with social networks such as Google+, Facebook, Twitter and LinkedIn on a daily basis, this is the best solution for you.

4. Google Keep:

Google Keep is the ultimate note-taking application out there. The app appeared without any hype, yet it has a simple, user-friendly interface and deep integration with Google services: you can easily create a new note, or even a whole note board, and stick anything you want to it. The app provides great features like snippets and checklists, and you can even add photos and reminders. It backs up all your data to Google Drive, so you don't have to worry about losing it.

5. Mailbox:

Mailbox has a simple, minimalistic user interface and all the features needed in an email client. You can work with Mailbox using simple gestures: swipe messages to archive or trash them. The app currently supports Gmail and iCloud accounts only. It also lets you snooze mail notifications, and it is designed to learn your frequent actions and snooze unimportant conversations.

6. Quip:

Quip is a great collaboration tool that combines messaging with instant group editing of files. You can create, import and share different types of documents, notes and to-do lists, and edit them very easily. The app works well with Dropbox, Google Drive and Evernote, and you can save or export documents in different formats. The app is available for free on Android and iOS.

7. Basecamp:

Basecamp is a good solution for a whole company. It offers a platform to keep track of your current projects and to-do lists. You can easily add documents to Basecamp from Google Drive, and the platform also offers its own storage space where you can upload your files. The app is free for up to a certain number of projects, with multiple plans that are suitable and affordable for everyone.

8. Microsoft Office 365:

Sometimes you just have to rely on Microsoft Office for your workplace documents. If your workplace requires you to stick to the Microsoft ecosystem, this is a great app for your smartphone. Accessing Office documents on an Android smartphone has become easier with Microsoft's cloud capabilities. The app is available in both free and paid versions: you can access and edit your Office files in the free version, but to use the cloud capabilities of the platform you need to pay a subscription fee. This is a great file editing tool for heavy Microsoft users.





Courtesy: Nfytymes

Learn About Java Technology

Java is the foundation for virtually every type of networked application and is the global standard for developing and delivering embedded and mobile applications, games, Web-based content, and enterprise software. With more than 9 million developers worldwide, Java enables you to efficiently develop, deploy and use exciting applications and services.

From laptops to datacenters, game consoles to scientific supercomputers, cell phones to the Internet, Java is everywhere!






Why Software Developers Choose Java

Java has been tested, refined, extended, and proven by a dedicated community of Java developers, architects and enthusiasts. Java is designed to enable development of portable, high-performance applications for the widest range of computing platforms possible. By making applications available across heterogeneous environments, businesses can provide more services and boost end-user productivity, communication, and collaboration—and dramatically reduce the cost of ownership of both enterprise and consumer applications. Java has become invaluable to developers by enabling them to:
  • Write software on one platform and run it on virtually any other platform
  • Create programs that can run within a web browser and access available web services
  • Develop server-side applications for online forums, stores, polls, HTML forms processing, and more
  • Combine applications or services using the Java language to create highly customized applications or services
  • Write powerful and efficient applications for mobile phones, remote processors, microcontrollers, wireless modules, sensors, gateways, consumer products, and practically any other electronic device


Some Ways Software Developers Learn Java


Many colleges and universities offer courses in programming for the Java platform. The Oracle Academy provides a complete portfolio of software, curriculum, hosted technology, faculty training, support, and certification resources to K-12, vocational, and higher education institutions for teaching use, including a Java offering that will support hundreds of thousands of students. In addition, developers can enhance their Java programming skills by reading the Oracle Java developer web site, subscribing to Java technology-focused newsletters and Java Magazine, using the Java Tutorial and the New to Java Programming Center, and signing up for Web, virtual, or instructor-led courses and certification.



The Oracle Technology Network is the world's largest community of application developers, database administrators, system admins/developers, and architects using industry-standard technologies in combination with Oracle products. It is also the home of java.oracle.com, the ultimate, complete, and authoritative source of technical information about Java. Membership is free; join today! (In your Profile, check the Oracle Technology Network box under My Community Memberships.)



Young Developers Learn Java

Young people are learning programming languages from the earliest ages and up. Visual educational tools such as Alice, Greenfoot and BlueJ teach young people how to program using the Java programming language, and Java-based languages developed for ease of use.


What Is JavaFX


JavaFX is powered by Java. The JavaFX platform enables application developers to easily create and deploy Rich Internet Applications (RIA) that behave consistently across multiple platforms. JavaFX extends the power of Java by allowing developers to use any Java library within JavaFX applications. This way developers can expand their capabilities in Java and make use of the presentation technology that JavaFX provides to build engaging visual experiences.
To learn more about JavaFX technology, visit the JavaFX web site.

I'm a Web Developer. Not an Engineer.

Something I’m starting to realise and accept for myself: I’m a web developer. Not an engineer. Mostly an approach & understanding difference

This seemed to strike a chord with a few people, and others asked if I could embellish on those thoughts. So here it is

On titles

"We" love titles. Personally, in the last 5 years, I've preferred the sound of being an engineer, or a JavaScript engineer. I actually quite like software designer. But let's face it, it's all fluff.
Actually, where titles aren't fluff is when you work in a larger company. Quite often titles relate directly to pay scales. I digress.
The engineer bit is a little less fluffy and does actually carry meaning.

Engineers

Calling yourself an engineer when you're not actually qualified as an engineer is sort of a no-no. In fact, there's some regulation around engineering titles - and certainly in the UK (according to the Wikipedia page), there have been petitions to protect the title.
Ignoring the regulations, to me, engineers are smart, educated (ideally in a form of engineering...) and specifically: solving complex computer engineering problems.
In a conversation (over twitter) I was having with Trek Glowacki, midway he replied with:
Take a step back and understand the real engineering goal: SRP
I wasn't aware of any engineering goal, nor what SRP stood for, nor did I really understand what the Wikipedia page was saying to me.
Honestly, I kind of feel out of my depth amongst engineers aka those people who *really* took the time to study computer science and grok the shit out of it.

I am a web developer

I don't know why I thought it was uncool to be a "web developer". Perhaps because it's utterly vague.
What "web developer" does mean to me though, is this:
Someone who writes code for browsers. Likely from the school of view source, is comfortable with drop-in libraries, understands standards and best practice techniques. But mostly, a tinkerer.

I am not alone

Tim Berners-Lee: web developer
The picture above is an (infamous) screen grab from the Web Platform introduction video. Of Sir Tim Berners-Lee. The creator of The Web.
I saw a lot of posts and tweets suggesting that this was a joke, or a disservice to his work, or that the title was utterly understated.
I'm not sure it was. There's no doubt that this man is responsible for a lot more, but he is The Web Developer.
I'm proud to say that I work on that same web. Making it better (and sometimes, perhaps, a little worse).

Embrace it

I've personally learnt my web development skills over a long period of time and nearly entirely through trial and error. I suspect most of us have.
Being a web developer doesn't have to mean you have to be a JavaScript wiz, or that you don't touch the server side. By virtue of tinkering, it's quite likely you're getting better at all these things.
Coming to this realisation is by no means a slur against those people who call themselves engineers. If you do, it's quite likely I already have a great deal of respect for your knowledge and understanding if I've come across your work already.
As for me, I'm proud to say: I am a web developer. I am a tinkerer.
Published at DZone

Microsoft open-sources .Net build system

MSBuild, the build platform for Microsoft's Visual Studio toolset and the .Net Platform, is now open source, marking the software giant's latest embrace of the open source movement.
Made available on GitHub and contributed to the .Net Foundation, MSBuild, or Microsoft.Build, serves as a platform for building applications. It's the default engine for Visual Studio and the .Net community on the Windows platform.
"By invoking msbuild.exe on your project or solution file, you can orchestrate and build products in environments where Visual Studio isn't installed. For instance, MSBuild is used to build the .Net Core Libraries and .Net Core Runtime open source projects," Microsoft's Rich Lander said in a blog post on Wednesday.
"We will be adding Linux and Mac support soon (perhaps with your help!) so you can use MSBuild to build the open source .Net projects on your preferred platform," Lander said. "We'll initially start with Mono and look to port the code to run on .Net Core. But we're really just getting started on our ports. We wanted to open up the code first so that we could all enjoy the cross-platform journey from the outset."
MSBuild provides an XML schema for a project file controlling how the build platform builds software. The MSBuild sources published Wednesday are closely aligned with the version to ship with Visual Studio 2015, Lander said. For now, Visual Studio is required to build the first time.
While initially gaining a reputation as an open source opponent some years ago, Microsoft has been taking steps to embrace the concept in recent years. The .Net Foundation, for instance, was formed nearly a year ago to promote .Net technologies in the open source vein. The company at that time also made its Roslyn compiler project open source. Last November, Microsoft revealed intentions to offer its server-side cloud stack via open source through its .Net Core initiative. In February, Microsoft open-sourced CoreCLR, the .Net execution engine in .Net Core.

This story, "Microsoft open-sources .Net build system" was originally published by InfoWorld.

It came from the cloud: The looming change in IT

Guess what? PaaS -- the cloud platform to automate dev, test, and deployment of applications -- is back.
For years, so-called thought leaders including myself have run off at the mouth about how IaaS is more of a stopgap, and eventually everything will be SaaS and PaaS. After all, why should you lovingly craft each installation yourself? Why should you build your own PaaS on IaaS with Puppet? Instead, the commoditization of computing will enable you to treat everything as either an application or a resource. PaaS must surely triumph.

For three years, during what I call the “great PaaS pause,” it didn't happen. Now, according to Pivotal, IBM, and others, the switch to PaaS is finally starting to take root.
Everything that has happened before will happen again. In the late '90s, I worked for the IS/IT department in the American headquarters of a now defunct cellphone manufacturer. Andersen Consulting had the bright idea that everything should be run as a project and IT should bill for it, but lacked the corresponding idea that after a project is finished, the resulting software requires maintenance. 
Each project by each department tended to get its own server until the rack was full. Then we'd push back and ask if they might care to combine applications on servers. You see, we would buy Compaq servers left and right (all for top dollar because the procurement was outsourced to Price Waterhouse, which charged at least four times what I could get from CDW in one-tenth of the time), but real estate for another rack was at a premium.
No one wanted to share, of course, because everyone wanted their applications done yesterday and on the cheap. Development was expensive, but maintenance was free, so crapware was the corporate standard.
Within a few years this entire culture would be wiped out. Virtualization changed it rather dramatically. These crappy departmental stovepipe applications no longer had to get along; they would all have their own little sandbox to crash in, and no one but the users would care.
The VMs, however, still belonged at least tangentially to the departments that funded or requested the project and caused them to exist. IT is still largely the caretaker for them. Often, resources are purchased based on the number of VMs procured, and a huge amount of computing is wasted running another copy of Windows for the sake of faking that you have a separate server.
Consider what changes would occur if IT moved all internal applications to an internally managed private PaaS. There is one general price to providing the service of running the application. The incremental price for using resources should also scale with the business value of the application. Next, consider what this does to the games played around maintenance.
In the VM world, once the application is deployed in production, developers largely can blame IT for anything that goes wrong. Frequently, the folks in charge of the VM are also the ones who have to prove that the application is crashing because it has a bug. In the PaaS world, the developers have to prove the application on the PaaS which IT maintains is working. If there is a genuine issue with the components of the PaaS, then presumably more applications would be failing. The question of “was it the OS or the runtime or the app?” answers itself.
Consider what this does for the process of QA and development. While you may have a development versus a QA versus a production database, and you may have a few installations based on that configuration, there's no reason for ceremonial build-outs of different environments. Instead, containers may be configured with different security policies. Moreover, applications can be “promoted” to QA or stage or prod. They don't necessarily even need to be reinstalled, simply pointed at a different database.
But what about the public PaaS? Supposedly the new PaaS model will be “hybrid” for big companies concerned about moving things into the public cloud or who have too much data generated internally to make it cost effective. Ultimately, the public PaaS is less expensive and scales better and you basically outsource a lot of the monkey work that IT is forced to do. Instead, IT moves more to data stewardship, policy enforcement, and support.
PaaS will be every bit as much of a culture change as virtualization, if not more. Ultimately, this change will bubble out of IT and even change the way the business works. But first, PaaS will change the way we do everything in IT -- and almost certainly change the lives of everyday developers.


This story, "It came from the cloud: The looming change in IT" was originally published by InfoWorld.

Top 10 Easy Performance Optimisations in Java

There has been a lot of hype about the buzzword "web scale", and people are going to great lengths to reorganise their application architecture to get their systems to "scale".
But what is scaling, and how can we make sure that we can scale?

Different aspects of scaling

The hype mentioned above is mostly about scaling load, i.e. to make sure that a system that works for 1 user will also work well for 10 users, or 100 users, or millions. Ideally, your system is as "stateless" as possible such that the few pieces of state that really remain can be transferred and transformed on any processing unit in your network. When load is your problem, latency is probably not, so it's OK if individual requests take 50-100ms. This is often also referred to as scaling out.
An entirely different aspect of scaling is about scaling performance, i.e. to make sure that an algorithm that works for 1 piece of information will also work well for 10 pieces, or 100 pieces, or millions. Whether this type of scaling is feasible is best described by Big O Notation. Latency is the killer when scaling performance. You want to do everything possible to keep all calculation on a single machine. This is often also referred to as scaling up.
If there was anything like free lunch (there isn’t), we could indefinitely combine scaling up and out. Anyway, today, we’re going to look at some very easy ways to improve things on the performance side.

Big O Notation

Java 7’s ForkJoinPool as well as Java 8’s parallel Stream help parallelising stuff, which is great when you deploy your Java program onto a multi-core processor machine. The advantage of such parallelism compared to scaling across different machines on your network is the fact that you can almost completely eliminate latency effects, as all cores can access the same memory.
But don’t be fooled by the effect that parallelism has! Remember the following two things:
  • Parallelism eats up your cores. This is great for batch processing, but a nightmare for asynchronous servers (such as HTTP). There are good reasons why we’ve used the single-thread servlet model in the past decades. So parallelism only helps when scaling up.
  • Parallelism has no effect on your algorithm’s Big O Notation. If your algorithm is O(n log n), and you let that algorithm run on c cores, you will still have an O(n log n / c) algorithm, as c is an insignificant constant in your algorithm’s complexity. You will save wall-clock time, but not reduce complexity!
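The second point can be sketched in code. This is a minimal, hedged illustration (the number range and size are made up): both pipelines do the same O(n) amount of work, the parallel one merely spreads it across cores, which may reduce wall-clock time but never the complexity class.

```java
import java.util.stream.LongStream;

public class ParallelSum {
    public static void main(String[] args) {
        long n = 10_000_000L;

        // Sequential: O(n) work on one core
        long sequential = LongStream.rangeClosed(1, n).sum();

        // Parallel: still O(n) work in total, spread across c cores.
        // Wall-clock time may shrink; the Big O does not.
        long parallel = LongStream.rangeClosed(1, n).parallel().sum();

        System.out.println(sequential == parallel); // true: same result either way
    }
}
```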
The best way to improve performance, of course, is by reducing algorithm complexity. The killer is to achieve O(1) or quasi-O(1), of course, for instance a HashMap lookup. But that is not always possible, let alone easy.
If you cannot reduce your complexity, you can still gain a lot of performance if you tweak your algorithm where it really matters, if you can find the right spots. Assume the following visual representation of an algorithm:
[Figure: a tree-shaped algorithm rooted at N, with a left branch N -> M ending in a heavy operation and a right branch N -> O -> P ending in an easy operation]
The overall complexity of the algorithm is O(N³), or O(N x O x P) if we want to deal with individual orders of magnitude. However, when profiling this code, you might find a funny scenario:
  • On your development box, the left branch (N -> M -> Heavy operation) is the only branch that you can see in your profiler, because the values for O and P are small in your development sample data.
  • On production, however, the right branch (N -> O -> P -> Easy operation or also N.O.P.E.) is really causing trouble. Your operations team might have figured this out using AppDynamics, or DynaTrace, or some similar software.
Without production data, you might quickly jump to conclusions and optimise the “heavy operation”. You ship to production and your fix has no effect.
There are no golden rules to optimisation apart from the facts that:
  • A well-designed application is much easier to optimise
  • Premature optimisation will not solve any performance problems, but make your application less well-designed, which in turn makes it harder to be optimised
Enough theory. Let’s assume that you have found the right branch to be the issue. It may well be that a very easy operation is blowing up in production, because it is called lots and lots of times (if N, O, and P are large). Please read this article in the context of there being a problem at the leaf node of an inevitable O(N³) algorithm. These optimisations won’t help you scale. They’ll help you save your customer’s day for now, deferring the difficult improvement of the overall algorithm until later!
Here are the top 10 easy performance optimisations in Java:

1. Use StringBuilder

This should be your default in almost all Java code. Try to avoid the + operator. Sure, you may argue that it is just syntax sugar for a StringBuilder anyway, as in:
String x = "a" + args.length + "b";
… which compiles to
 0  new java.lang.StringBuilder [16]
 3  dup
 4  ldc <String "a"> [18]
 6  invokespecial java.lang.StringBuilder(java.lang.String) [20]
 9  aload_0 [args]
10  arraylength
11  invokevirtual java.lang.StringBuilder.append(int) : java.lang.StringBuilder [23]
14  ldc <String "b"> [27]
16  invokevirtual java.lang.StringBuilder.append(java.lang.String) : java.lang.StringBuilder [29]
19  invokevirtual java.lang.StringBuilder.toString() : java.lang.String [32]
22  astore_1 [x]
But what happens, if later on, you need to amend your String with optional parts?
String x = "a" + args.length + "b";
 
if (args.length == 1)
    x = x + args[0];
You will now have a second StringBuilder, that just needlessly consumes memory off your heap, putting pressure on your GC. Write this instead:
StringBuilder x = new StringBuilder("a");
x.append(args.length);
x.append("b");
 
if (args.length == 1)
    x.append(args[0]);
Takeaway
In the above example, it is probably completely irrelevant if you’re using explicit StringBuilder instances, or if you rely on the Java compiler creating implicit instances for you. But remember, we’re in the N.O.P.E. branch. Every CPU cycle that we’re wasting on something as stupid as GC or allocating a StringBuilder‘s default capacity, we’re wasting N x O x P times.
As a rule of thumb, always use a StringBuilder rather than the + operator. And if you can, keep the StringBuilder reference across several methods, if your String is more complex to build. This is what jOOQ does when you generate a complex SQL statement. There is only one StringBuilder that “traverses” your whole SQL AST (Abstract Syntax Tree).
And for crying out loud, if you still have StringBuffer references, do replace them by StringBuilder. You really hardly ever need to synchronize on a string being created.

2. Avoid regular expressions

Regular expressions are relatively cheap and convenient. But if you’re in the N.O.P.E. branch, they’re about the worst thing you can do. If you absolutely must use regular expressions in computation-intensive code sections, at least cache the Pattern reference instead of compiling it afresh all the time:
static final Pattern HEAVY_REGEX =
    Pattern.compile("(((X)*Y)*Z)*");
But if your regular expression is really silly like
String[] parts = ipAddress.split("\\.");
… then you’d really better resort to ordinary char[] or index-based manipulation. For example, this utterly unreadable loop does the same thing:
String[] parts = new String[4]; // e.g. for an IPv4 address
int length = ipAddress.length();
int offset = 0;
int part = 0;
for (int i = 0; i < length; i++) {
    if (i == length - 1 ||
            ipAddress.charAt(i + 1) == '.') {
        parts[part] =
            ipAddress.substring(offset, i + 1);
        part++;
        offset = i + 2;
    }
}
… which also shows why you shouldn’t do any premature optimisation. Compared to the split() version, this is unmaintainable.
Challenge: The clever ones among your readers might find even faster algorithms.
Takeaway
Regular expressions are useful, but they come at a price. If you’re deep down in a N.O.P.E. branch, you must avoid regular expressions at all costs. Beware of the variety of JDK String methods that use regular expressions, such as String.replaceAll() or String.split().
Use a popular library like Apache Commons Lang instead, for your String manipulation.

3. Do not use iterator()

Now, this advice is really not for general use-cases, but only applicable deep down in a N.O.P.E. branch. Nonetheless, you should think about it. Writing Java-5 style foreach loops is convenient. You can just completely forget about looping internals, and write:
for (String value : strings) {
    // Do something useful here
}
However, every time you run into this loop, if strings is an Iterable, you will create a new Iterator instance. If you’re using an ArrayList, this is going to be allocating an object with 3 ints on your heap:
private class Itr implements Iterator<E> {
    int cursor;
    int lastRet = -1;
    int expectedModCount = modCount;
    // ...
Instead, you can write the following, equivalent loop and “waste” only a single int value on the stack, which is dirt cheap:
int size = strings.size();
for (int i = 0; i < size; i++) {
    String value = strings.get(i);
    // Do something useful here
}
… or, if your list doesn’t really change, you might even operate on an array version of it:
for (String value : stringArray) {
    // Do something useful here
}
Takeaway
Iterators, Iterable, and the foreach loop are extremely useful from a writeability and readability perspective, as well as from an API design perspective. However, they create a small new instance on the heap for each single iteration. If you run this iteration many many times, you want to make sure to avoid creating this useless instance, and write index-based iterations instead.
Discussion
Some interesting disagreement about parts of the above (in particular replacing Iterator usage by access-by-index) has been discussed on Reddit.

4. Don’t call that method

Some methods are simply expensive. In our N.O.P.E. branch example, we don’t have such a method at the leaf, but you may well have one. Let’s assume your JDBC driver needs to go through incredible trouble to calculate the value of ResultSet.wasNull(). Your homegrown SQL framework code might look like this:
if (type == Integer.class) {
    result = (T) wasNull(rs,
        Integer.valueOf(rs.getInt(index)));
}
 
// And then...
static final <T> T wasNull(ResultSet rs, T value)
throws SQLException {
    return rs.wasNull() ? null : value;
}
This logic will now call ResultSet.wasNull() every time you get an int from the result set. But the getInt() contract reads:
Returns: the column value; if the value is SQL NULL, the value returned is 0
Thus, a simple, yet possibly drastic improvement to the above would be:
static final <T extends Number> T wasNull(
    ResultSet rs, T value
)
throws SQLException {
    return (value == null ||
           (value.intValue() == 0 && rs.wasNull()))
        ? null : value;
}
So, this is a no-brainer:
Takeaway
Don’t call expensive methods in an algorithm’s leaf nodes; cache the call instead, or avoid it if the method contract allows it.

5. Use primitives and the stack

The above example is from jOOQ, which uses a lot of generics, and thus is forced to use wrapper types for byte, short, int, and long – at least before generics will be specialisable in Java 10 and project Valhalla. But you may not have this constraint in your code, so you should take all measures to replace:
// Goes to the heap
Integer i = 817598;
… by this:
// Stays on the stack
int i = 817598;
Things get worse when you’re using arrays:
// Three heap objects!
Integer[] i = { 1337, 424242 };
… by this:
// One heap object.
int[] i = { 1337, 424242 };
Takeaway
When you’re deep down in your N.O.P.E. branch, you should be extremely wary of using wrapper types. Chances are that you will create a lot of pressure on your GC, which has to kick in all the time to clean up your mess.
A particularly useful optimisation might be to use some primitive type and create large, one-dimensional arrays of it, and a couple of delimiter variables to indicate where exactly your encoded object is located on the array.
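A minimal sketch of that encoding trick, with made-up names: 2D points stored in a single int[] with a stride of 2, so the whole collection is one heap object instead of one object per point.

```java
public class PackedPoints {
    // Each "point" occupies STRIDE consecutive slots: [x, y]
    static final int STRIDE = 2;

    static void set(int[] points, int i, int x, int y) {
        points[i * STRIDE]     = x;
        points[i * STRIDE + 1] = y;
    }

    static int x(int[] points, int i) { return points[i * STRIDE]; }
    static int y(int[] points, int i) { return points[i * STRIDE + 1]; }

    public static void main(String[] args) {
        int n = 3;
        int[] points = new int[n * STRIDE]; // one heap object for all points

        set(points, 0, 10, 20);
        set(points, 1, 30, 40);
        set(points, 2, 50, 60);

        System.out.println(x(points, 1)); // 30
        System.out.println(y(points, 2)); // 60
    }
}
```

The delimiter here is a fixed stride; for variable-size records you would keep explicit offset variables, as the paragraph above suggests.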
An excellent library for primitive collections, which are a bit more sophisticated than your average int[] is trove4j, which ships with LGPL.
Exception
There is an exception to this rule: boolean and byte have few enough values to be cached entirely by the JDK. You can write:
Boolean a1 = true; // ... syntax sugar for:
Boolean a2 = Boolean.valueOf(true);
 
Byte b1 = (byte) 123; // ... syntax sugar for:
Byte b2 = Byte.valueOf((byte) 123);
The same is true for low values of the other integer primitive types, including char, short, int, and long.
But only if you’re auto-boxing them, or calling TheType.valueOf(), not when you call the constructor!
Never call the constructor on wrapper types, unless you really want a new instance
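A small sketch of that difference (the -128..127 Integer cache used by auto-boxing is guaranteed by the language spec; values outside that range may or may not be cached):

```java
public class WrapperCache {
    public static void main(String[] args) {
        // Auto-boxing goes through Integer.valueOf(), which caches -128..127
        Integer a = 127, b = 127;
        System.out.println(a == b); // true: same cached instance

        // The (deprecated) constructor always allocates a fresh instance
        Integer c = new Integer(127);
        System.out.println(a == c); // false: distinct heap object
    }
}
```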
Off heap
Of course, you might also want to experiment with off-heap libraries, although they’re more of a strategic decision, not a local optimisation.
An interesting article on that subject by Peter Lawrey and Ben Cotton is: OpenJDK and HashMap… Safely Teaching an Old Dog New (Off-Heap!) Tricks

6. Avoid recursion

Modern functional programming languages like Scala encourage the use of recursion, as they offer means of optimising tail-recursing algorithms back into iterative ones. If your language supports such optimisations, you might be fine. But even then, the slightest change of algorithm might produce a branch that prevents your recursion from being tail-recursive. Hopefully the compiler will detect this! Otherwise, you might be wasting a lot of stack frames for something that might have been implemented using only a few local variables.
Takeaway
There’s not much to say about this apart from: Always prefer iteration over recursion when you’re deep down the N.O.P.E. branch
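A minimal sketch of this rewrite (the digit-sum example is mine, not from the article): the JVM performs no tail-call elimination, so the recursive version burns one stack frame per call, while the iterative version needs only two local variables.

```java
public class SumDigits {

    // Tail-recursive version - each call still consumes a JVM
    // stack frame, since the JVM does not eliminate tail calls:
    static long sumRecursive(long n, long acc) {
        if (n == 0)
            return acc;
        return sumRecursive(n / 10, acc + n % 10);
    }

    // Same algorithm as a loop - constant stack usage:
    static long sumIterative(long n) {
        long acc = 0;
        while (n != 0) {
            acc += n % 10;
            n /= 10;
        }
        return acc;
    }
}
```

Both compute the same result, but only the iterative version is safe for inputs whose recursion depth would otherwise overflow the stack.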

7. Use entrySet()

When you want to iterate through a Map, and you need both keys and values, you must have a very good reason to write the following:
for (K key : map.keySet()) {
    V value = map.get(key);
}
… rather than the following:
for (Entry<K, V> entry : map.entrySet()) {
    K key = entry.getKey();
    V value = entry.getValue();
}
When you’re in the N.O.P.E. branch, you should be wary of maps anyway, because lots and lots of O(1) map access operations are still lots of operations. And the access isn’t free either. But at least, if you cannot do without maps, use entrySet() to iterate them! The Map.Entry instance is there anyway, you only need to access it.
Takeaway
Always use entrySet() when you need both keys and values during map iteration.

8. Use EnumSet or EnumMap

There are some cases where the number of possible keys in a map is known in advance – for instance when using a configuration map. If that number is relatively small, you should really consider using EnumSet or EnumMap instead of a regular HashSet or HashMap. This is easily explained by looking at EnumMap.put():
private transient Object[] vals;
 
public V put(K key, V value) {
    // ...
    int index = key.ordinal();
    vals[index] = maskNull(value);
    // ...
}
The essence of this implementation is the fact that we have an array of indexed values rather than a hash table. When inserting a new value, all we have to do to look up the map entry is ask the enum for its constant ordinal, which is generated by the Java compiler on each enum type. If this is a global configuration map (i.e. only one instance), the increased access speed will help EnumMap heavily outperform HashMap, which may use a bit less heap memory, but which will have to run hashCode() and equals() on each key.
Takeaway
Enum and EnumMap are very close friends. Whenever you use enum-like structures as keys, consider actually making those structures enums and using them as keys in EnumMap.
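A minimal usage sketch (the configuration keys here are hypothetical): EnumMap stores values in a plain array indexed by ordinal(), and EnumSet represents up to 64 constants as a bit mask in a single long, so both beat their hash-based counterparts for enum keys.

```java
import java.util.EnumMap;
import java.util.EnumSet;

public class Config {

    // Hypothetical configuration keys, modelled as an enum:
    enum Key { HOST, PORT, TIMEOUT }

    public static void main(String[] args) {
        // EnumMap backs its entries with an array indexed by ordinal() -
        // no hashCode(), no equals(), no collision handling:
        EnumMap<Key, String> config = new EnumMap<>(Key.class);
        config.put(Key.HOST, "localhost");
        config.put(Key.PORT, "8080");

        // EnumSet.of() builds a bit-mask-backed set:
        EnumSet<Key> required = EnumSet.of(Key.HOST, Key.PORT);

        System.out.println(config.keySet().containsAll(required)); // true
    }
}
```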

9. Optimise your hashCode() and equals() methods

If you cannot use an EnumMap, at least optimise your hashCode() and equals() methods. A good hashCode() method is essential because it will prevent further calls to the much more expensive equals(), as it will produce more distinct hash buckets per set of instances.
In every class hierarchy, you may have popular and simple objects. Let’s have a look at jOOQ’s org.jooq.Table implementations.
The simplest and fastest possible implementation of hashCode() is this one:
// AbstractTable, a common Table base implementation:
 
@Override
public int hashCode() {
 
    // [#1938] This is a much more efficient hashCode()
    // implementation compared to that of standard
    // QueryParts
    return name.hashCode();
}
… where name is simply the table name. We don’t even consider the schema or any other property of the table, as table names are usually distinct enough across a database. Also, the name is a string, so it already has a cached hashCode() value inside.
The comment is important, because AbstractTable extends AbstractQueryPart, which is a common base implementation for any AST (Abstract Syntax Tree) element. The common AST element does not have any properties, so it cannot make any assumptions about an optimised hashCode() implementation. Thus, the overridden method looks like this:
// AbstractQueryPart, a common AST element
// base implementation:
 
@Override
public int hashCode() {
    // This is a working default implementation.
    // It should be overridden by concrete subclasses,
    // to improve performance
    return create().renderInlined(this).hashCode();
}
In other words, the whole SQL rendering workflow has to be triggered to calculate the hash code of a common AST element.
Things get more interesting with equals():
// AbstractTable, a common Table base implementation:
 
@Override
public boolean equals(Object that) {
    if (this == that) {
        return true;
    }
 
    // [#2144] Non-equality can be decided early,
    // without executing the rather expensive
    // implementation of AbstractQueryPart.equals()
    if (that instanceof AbstractTable) {
        if (StringUtils.equals(name,
            (((AbstractTable<?>) that).name))) {
            return super.equals(that);
        }
 
        return false;
    }
 
    return false;
}
First thing: Always (not only in a N.O.P.E. branch) abort every equals() method early, if:
  • this == argument
  • this is of an “incompatible type” with the argument
Note that the latter condition includes argument == null, if you’re using instanceof to check for compatible types. We’ve blogged about this before in 10 Subtle Best Practices when Coding Java.
Now, after aborting comparison early in obvious cases, you might also want to abort comparison early when you can make partial decisions. For instance, the contract of jOOQ’s Table.equals() is that for two tables to be considered equal, they must be of the same name, regardless of the concrete implementation type. For instance, there is no way these two items can be equal:
  • com.example.generated.Tables.MY_TABLE
  • DSL.tableByName("MY_OTHER_TABLE")
If the argument cannot be equal to this, and if we can check that easily, let’s do so and abort if the check fails. If the check succeeds, we can still proceed with the more expensive implementation from super. Given that most objects in the universe are not equal, we’re going to save a lot of CPU time by shortcutting this method.
some objects are more equal than others
In the case of jOOQ, most instances are really tables as generated by the jOOQ source code generator, whose equals() implementation is even further optimised. The dozens of other table types (derived tables, table-valued functions, array tables, joined tables, pivot tables, common table expressions, etc.) can keep their “simple” implementation.

10. Think in sets, not in individual elements

Last but not least, there is a thing that is not Java-related but applies to any language. Besides, we’re leaving the N.O.P.E. branch, as this advice might just help you move from O(N³) to O(n log n), or something like that.
Unfortunately, many programmers think in terms of simple, local algorithms. They’re solving a problem step by step, branch by branch, loop by loop, method by method. That’s the imperative and/or functional programming style. While it is increasingly easy to model the “bigger picture” when going from pure imperative to object oriented (still imperative) to functional programming, all these styles lack something that only SQL and R and similar languages have:
Declarative programming.
In SQL (and we love it, as this is the jOOQ blog) you can declare the outcome you want to get from your database, without making any algorithmic implications whatsoever. The database can then take all the meta data available into consideration (e.g. constraints, keys, indexes, etc.) to figure out the best possible algorithm.
In theory, this has been the main idea behind SQL and relational calculus from the beginning. In practice, SQL vendors have implemented highly efficient CBOs (Cost-Based Optimisers) only in the last decade, so stay with us in the 2010s when SQL will finally unleash its full potential (it was about time!)
But you don’t have to do SQL to think in sets. Sets / collections / bags / lists are available in all languages and libraries. The main advantage of using sets is the fact that your algorithms will become much much more concise. It is so much easier to write:
SomeSet INTERSECT SomeOtherSet
rather than:
// Pre-Java 8
Set result = new HashSet();
for (Object candidate : someSet)
    if (someOtherSet.contains(candidate))
        result.add(candidate);
 
// Even Java 8 doesn't really help
someSet.stream()
       .filter(someOtherSet::contains)
       .collect(Collectors.toSet());
Some may argue that functional programming and Java 8 will help you write easier, more concise algorithms. That’s not necessarily true. You can translate your imperative Java-7-loop into a functional Java-8 Stream collection, but you’re still writing the very same algorithm. Writing a SQL-esque expression is different. This…
SomeSet INTERSECT SomeOtherSet
… can be implemented in 1000 ways by the implementation engine. As we’ve learned today, perhaps it is wise to transform the two sets into EnumSet automatically, before running the INTERSECT operation. Perhaps we can parallelise this INTERSECT without making low-level calls to Stream.parallel().
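Tying the two tips together, here is a minimal sketch (the Color enum is hypothetical) of expressing INTERSECT via EnumSet, where retainAll() boils down to a single bitwise AND on the backing long:

```java
import java.util.EnumSet;

public class Intersect {

    enum Color { RED, GREEN, BLUE, YELLOW }

    public static void main(String[] args) {
        EnumSet<Color> warm    = EnumSet.of(Color.RED, Color.YELLOW);
        EnumSet<Color> primary = EnumSet.of(Color.RED, Color.BLUE, Color.YELLOW);

        // copyOf() + retainAll() expresses "warm INTERSECT primary";
        // for EnumSet this is effectively one bitwise AND:
        EnumSet<Color> result = EnumSet.copyOf(warm);
        result.retainAll(primary);

        System.out.println(result); // [RED, YELLOW]
    }
}
```

The calling code declares *what* it wants (an intersection); how cheaply that runs depends entirely on the set implementation underneath.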

Conclusion

In this article, we’ve talked about optimisations done on the N.O.P.E. branch, i.e. deep down in a high-complexity algorithm. In our case, being the jOOQ developers, we have interest in optimising our SQL generation:
  • Every query is generated using a single StringBuilder
  • Our templating engine actually parses characters, instead of using regular expressions
  • We use arrays wherever we can, especially when iterating over listeners
  • We stay clear of JDBC methods that we don’t have to call
  • etc…
jOOQ is at the “bottom of the food chain”, because it’s the (second-)last API that is being called by our customers’ applications before the call leaves the JVM to enter the DBMS. Being at the bottom of the food chain means that every line of code that is executed in jOOQ might be called N x O x P times, so we must optimise eagerly.
Your business logic is not deep down in the N.O.P.E. branch. But your own, home-grown infrastructure logic may be (custom SQL frameworks, custom libraries, etc.). Those should be reviewed according to the rules that we’ve seen today, for instance using Java Mission Control or any other profiler.


This story, "Top 10 Easy Performance Optimisations in Java" was originally published by blog.jooq.org