Thursday, March 26, 2015

7 Tips To Avoid Getting Scammed On The Internet

  Criminals are everywhere. As the internet community grows, the number of internet scams grows rapidly with it. Con artists and disreputable companies will do and say anything to get what they want, so you need to be vigilant when using the internet and communicating with any third party online. Today we have listed 7 tips to avoid getting scammed on the internet.




1. Be Skeptical: 

You must be cautious when making any deal on the internet. Question phone calls, mail solicitations, email offers, and even links on social media. Remember that con artists are smart and know exactly how convincing their scams look. Even if an offer made over the phone sounds legitimate, do not trust it blindly.

2. Guard Personal Information: 

Fraudsters use a number of tricks to draw you into conversation and then probe for very personal information: your account numbers, your passwords. They also send bogus emails designed to look like they come from your bank. Remember that a bank will never ask for such sensitive information over a phone call or email.

3. Take Your Time: 

Don’t let anyone rush you into making a purchase. Fraudsters advertise in very lucrative terms to make you fall for a fake offer and quietly extract important information from you. A seller with a genuine bargain will never pressure you into buying immediately. Take your time to think before making the purchase.

4. Get It In Writing: 

Verbal promises count for nothing. Irrespective of what a salesperson tells you, the only thing that matters is what’s written down. Read the sales contract carefully, too: a small loophole you fail to notice can be enough to get you into trouble.

5. Beware Of Links And Attachments: 

It’s all too easy to click on a link in a text, email, or social media post, and most people do it without thinking. Fraudsters count on that curiosity to load malicious software onto your computer, smartphone, or other device. Be wary before clicking any link in an email or advertisement, and scan attachments for viruses before opening them.

6. Paid Prize: 

This is a very common trick. Fraudsters announce that you have won a contest and attach a lucrative-sounding condition to claiming the prize. If a contest is legitimate, you will never be required to buy anything or pay any money to claim your prize. Regardless of how tempting the offer sounds, never make a payment to claim a prize.

7. Free Merchandise Offer Or Money Back: 

Free is lucrative; everybody falls for free stuff. Remember that ads offering to send you a free sample of some product are rarely actually free. There is usually a catch: the first product may be free, but you may then have to pay outsized shipping charges or taxes. And if you are asked to provide credit card details, expect to be charged on a monthly or annual basis.

7 Tips To Mitigate Data Breaches

 Data breaches have grown sharply in the last few years, but they can be controlled and prevented, and awareness of how to prevent them has grown too. Organisations follow several practices to control data breaches. Today we have listed seven tips to mitigate data breaches.


1. Prioritise Data Protection: 

Some level of prioritisation in data protection can be very effective. Many security practices have become so general that organisations spend enormous effort trying to protect everything, which is not always possible. It is far more effective to identify and safeguard the most important assets first, and accept the fact that the rest of the data could be compromised.

2. Document Your Response Process: 

Documenting your response process is in high demand for good reason: it helps you follow the security measures you have set. Stress levels rise during a security attack and you get pulled in many directions; with a documented process, you can avoid omitting key actions. Checklists can be a great help.

3. Make Users Part of The Process: 

The most commonly forgotten aspect of incident response is informing end users. If an organisation’s store of user credentials is stolen, end users bear much of the impact. It is the IT team’s responsibility to inform the affected users so that they can change their passwords. Making users part of the process is essential.

4. Understand Business Context: 

Responders often need to take systems and applications offline for analysis. When investigating a system for potential compromise, it is important to know what credential data may have been stolen and to consider the business impact of the breach, and of the downtime. Organisations can leverage data loss prevention tools to map out how important data flows through the business.

5. Be Thorough: 

It is easy to find the apparent source of malware in an attack: track the attacker, find the malware, and eradicate it. However, you might still miss traces of it elsewhere on your systems. Investigators should follow every piece of evidence until they are sure they have uncovered all of the attackers’ footholds.

6. Proactively Collect Data: 

It is always good practice to collect the required data in advance. Record the right logs from properly configured security systems, and capture packet traces from relevant network locations, before an incident forces you to scramble for them.

7. Go with the Flow: 

Packet analysis provides great visibility into network traffic, but the number of packet captures required to cover all potential targets and locations makes it cumbersome and costly. Flow technologies such as NetFlow deliver performance metrics and can provide up to 90 per cent of the visibility of full packet analysis at far lower cost.


Courtesy: nfytymes

7 Best Java IDEs For Programmers

IDEs are packed with features and tools that enable quick, easy development and boost developer productivity. Many Java IDEs are also open source, giving users the opportunity to contribute code directly to the advancement of the IDE itself. Today we have listed the 7 best Java IDEs for Java developers.



1. NetBeans

NetBeans is a smarter, faster way to code. The IDE supports Java, PHP, C/C++, and HTML5 programming, and is available for Windows, OS X, Solaris, and Linux. NetBeans applications are built from modules that can be extended by independent developers. NetBeans is an open source project, so individuals and companies can contribute to its development, and the tool is available for free.

2. Eclipse

Eclipse is the most popular IDE for Java; it can even be used from a web browser (the Eclipse Orion project). Its autocompletion feature is well known: developers rarely have to refer to API documentation. Eclipse has its own library of plug-ins, and developers can create their own plug-ins to customize the IDE. Eclipse is used by a huge number of Java developers around the world.

3. IntelliJ

IntelliJ is among the most intelligent IDEs out there. It comes in two editions: the Ultimate edition is a paid product, while the Community edition is free and open source. The IDE is known for catching developer errors during the editing process, saving time and increasing productivity.

4. Android Studio

Android Studio, by Google, is an intelligent IDE based on IntelliJ. The tool is specially designed for Android developers and is available for Windows, Mac OS X, and Linux, free of charge. Google has released a stable version of Android Studio, which is set to replace Eclipse as the primary IDE for Android development.

5. JDeveloper


JDeveloper is a Java IDE from Oracle Corporation that aims to speed application development by providing a complete coding environment. It focuses on the visual and declarative features most commonly used by application developers, boosting productivity, and includes special features such as code auditing, testing, and profiling.

6. BlueJ

BlueJ is a free IDE aimed at beginning Java developers and is mostly used for educational purposes. Rather than a feature-heavy user interface, it offers a deliberately simple, straightforward one that centers on the objects of the application under development, which makes it user-friendly for newcomers.

7. jGRASP

jGRASP is a popular Java IDE centered on visualization: it renders a visual form of the application as it is being coded, in real time. The tool is available for free and is supported by the National Science Foundation.

Get ready for the new stack

Virtualization may be the most successful technology ever to cross the threshold of the enterprise data center. Vastly better hardware utilization and the ability to spin up VMs on a dime have made virtualization an easy sell over the last decade, to the point where Gartner recently estimated that 70 percent of x86 workloads are virtualized.



Yet the fancy private cloud stuff on top of that virtualization layer has been slow in coming. Yes, virtualization management tools from VMware and Microsoft have enabled cloudlike behavior for servers and storage, and even OpenStack is finally getting a little enterprise traction -- but the advanced public clouds offered by Amazon, Google, IBM, Microsoft, and Rackspace deliver much more advanced autoscaling, metering, and self-service (not to mention hundreds of other services). Plus, the PaaS cloud layer for developing, testing, and deploying apps -- now offered by all major public clouds -- has found its way into relatively few enterprise data centers.
Then Docker roared onto the scene last year, offering a new cloud stack based on containers rather than VMs. Containers are much lighter weight than VMs and enable applications to be packaged and moved with ease, without the hassle of conventional installation. If VM-based clouds have stalled, and the new container-based stack offers such obvious advantages, will the new stack leapfrog its way into the enterprise to deliver a new private cloud?
Zorawar Biri Singh, former head of HP Cloud Services and now a venture partner at Khosla Ventures, thinks the triumph of the new stack is inevitable -- but we're still years away from enterprise adoption. Here's where he sees the bottlenecks:
First, for traditional enterprises and traditional production workloads, the current IT spend is focused on simplifying and managing the VM sprawl via converged solutions in the data center. Second, the new stack is still brittle and early. Real utility around containers, like hardened security, is still nowhere near adequate. Right now the new stack is a very good seeding ground for dev and test workloads. But the real friction point is that enterprise production-workload IT teams lack the devops orientation or agile IT backgrounds to be able to deploy and support distributed or stateless apps. One of the biggest issues is that there's just a huge skills gap in devops in traditional enterprise orgs.
On the other hand, says Singh, "certain dev teams and greenfield lines of business are already riding on this infrastructure." In such cases, either devops methods are already in place, or pioneering developers are handling the operations side of the container-based stack themselves.
Just as developers have driven the adoption of NoSQL databases, they're on the front lines of the new stack, downloading open source software and experimenting -- or turning to public clouds like EC2 or Azure that already support containers.
Why do developers like the new stack so much? In large part because containers are conducive to microservices architecture, where collections of single-purpose, API-accessible services replace monolithic apps. Microservices architecture enables developers to build applications that are more adaptable to new requirements -- and to create entirely new applications quickly using existing services.
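To make the idea concrete, here is a minimal sketch of a single-purpose, API-accessible service, written in Java against the JDK's built-in com.sun.net.httpserver package (the class name and payload are illustrative, not drawn from any company mentioned here). A real deployment would add monitoring, configuration, and packaging, but the shape is the same: one small service, one narrow API.

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    // A deliberately tiny service: it does one thing (serve a quote)
    // and exposes it over HTTP/JSON for other services to consume.
    public class QuoteService {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/quote", exchange -> {
                byte[] body = "{\"quote\":\"Simplicity is prerequisite for reliability.\"}"
                        .getBytes("UTF-8");
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            // Replace a monolith with many of these, each independently deployable.
            server.start();
        }
    }
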
John Sheehan, co-founder and CEO of the API monitoring and testing service Runscope, sees microservices as a "modernization" of SOA (service-oriented architecture). "The core responsibilities are largely the same," says Sheehan. "We want to distribute different parts of our software architecture across different systems and break it up not just by code boundaries but by service boundaries. That learning has carried over to microservices."
Microservices architecture relies on simpler, more developer-friendly protocols than SOA did -- REST as opposed to SOAP, JSON as opposed to XML. Sheehan notes another key difference:
The types of microservices that we see and that our customers tend to use are very devops-driven. Internally, we deploy about 31 times a day at our company across all of our different services. We're 14 people and we have about 40 different services running internally. So a big part of it is putting the necessary infrastructure in place so each team is able to independently deploy, scale, monitor and measure each service.
In such a scenario, the line between dev and ops blurs. Ops personnel write code to manage the infrastructure, essentially becoming part of the development team. "There's very little distinction between ops team and apps team," says Sheehan. In ops, "you happen to be coding against servers instead of coding against the service."
Singh believes the devops-intensive microservices approach might obviate the need for "formal" PaaS. Such PaaS offerings as Cloud Foundry or OpenShift offer predetermined collections of services and processes for building, testing, and deploying applications -- whereas, in the new stack, rich sets of API-accessible microservices can be embedded in every layer. Both dev and ops can plug into microservices up and down the stack, without the constraints imposed by PaaS.
A different kind of hybrid
 
Microservices architecture may leapfrog PaaS, but the entire new stack will not take root overnight. For example, Netflix is widely considered to have the most advanced microservices deployment anywhere, and it makes many prebuilt services available to the open source community as Docker images on Docker Hub -- but Netflix doesn't use Docker in production. Nor does Runscope, for that matter. Both use conventional VMs instead.
Despite the huge interest among developers in container-based solutions, it's early days. For one thing, the orchestration and management tools for containers, such as Mesosphere and Kubernetes, are still evolving. For another, it's not clear which container standard will win, with CoreOS posing a major challenge to Docker last December. The container-based stack may triumph eventually, but it's going to take a while.
"We see the most likely outcome is that containers and VMs will be used in combination," says Kurt Milne of the multicloud management provider Cliqr. That could mean running containers inside VMs -- or it could simply mean that new container-based stacks and VM-based stacks will run side by side.
This hybrid scenario opens an opportunity for VMware and others who have built management and orchestration for virtualization. In an interview with InfoWorld last week, VMware executive vice president Raghu Raghuram refused to view containers as a threat. Instead, he said:
We see containers as a way to bring new applications onto our platform. When developers or IT folks wonder what they need to run containers in a robust way, it turns out they need a layer of infrastructure underneath -- they need persistence, they need networking, they need firewalling, they need resource management and all those sorts of things. We've already built that. When you plop the container mechanism on top of this, then you can start to use the same infrastructure for those things as well.
We're seeing patterns where the stateless Web front end is all containers, and the persistence and the databases are all VMs. It's a mix of both. So now the question is: What is a common infrastructure environment and a common management environment? We see that as a tremendous opportunity for us.
Raghuram declined to say when VMware might extend its management tools to the container layer, but the implication is clear. It will be interesting to see how VMware's ops-oriented approach will be met by the developers who are driving today's container-based experimentation.
What's clear is that, despite the current excitement, the new stack will not supplant the existing one in some dramatic rip-and-replace wave. As with cloud adoption, the container-based stack will almost exclusively be used for dev and test first. The huge existing investment in virtualization infrastructure will not be thrown out of the data center window.
Nonetheless, the new container-based stack is a big leap forward in agility and developer control. Developers are discovering and adopting the tools they need to build out microservices architecture and to deliver more and better applications at a fantastic clip. As the pieces fall into place, and devops skills become ubiquitous, you can bet the new stack will take root as relentlessly as virtualization did.
This story, "Get ready for the new stack" was originally published by InfoWorld.

Modularity in Java 9: Stacking up with Project Jigsaw, Penrose, and OSGi

This article provides an overview of proposals, specifications, and platforms aimed at making Java technology more modular in Java 9. I'll discuss factors contributing to the need for a more modular Java architecture, briefly describe and compare the solutions that have been proposed, and introduce the three modularity updates planned for Java 9, including their potential impact on Java development.


Why do we need Java modularity?

Modularity is a general concept. In software, it applies to writing and implementing a program or computing system as a number of unique modules, rather than as a single, monolithic design. A standardized interface is then used to enable the modules to communicate. Partitioning an environment of software constructs into distinct modules helps us minimize coupling, optimize application development, and reduce system complexity.
Modularity enables programmers to do functionality testing in isolation and engage in parallel development efforts during a given sprint or project. This increases efficiency throughout the entire software development lifecycle.
Some characterizing attributes of a genuine module are:
  • An autonomous unit of deployment (loose coupling)
  • A consistent and unique identity (module ID and version)
  • Easily identified and discovered requirements and dependencies (standard compile-time and deployment facilities and meta-information)
  • An open and understandable interface (communication contract)
  • Hidden implementation details (encapsulation)
Systems that are built to efficiently process modules should do the following:
  • Support modularity and dependency-discovery at compile-time
  • Execute modules in a runtime environment that supports easy deployment and redeployment without system downtime
  • Implement an execution lifecycle that is clear and robust
  • Provide facilities for easy registry and discovery of modules
Object-oriented, component-oriented, and service-oriented solutions have all attempted to enable pure modularity. Each solution has its own set of quirks that prevent it from achieving modular perfection, however. Let's briefly consider each.

Java classes and objects as modular constructs

Doesn't the object-oriented nature of Java satisfy the requirements of modularity? After all, object-oriented programming with Java stresses and sometimes enforces uniqueness, data encapsulation, and loose coupling. While these points are a good start, notice the modularity requirements that aren't met by Java's object-oriented framework: identity at the object level is unreliable; interfaces are not versioned; and classes are not unique at the deployment level. Loose coupling is a best practice, but certainly not enforced.
Reusing classes in Java is difficult when third-party dependencies are so easily misused. Compile-time tools such as Maven seek to address this shortcoming. After-the-fact language conventions and constructs such as dependency injection and inversion of control help developers in their attempts to control the runtime environment, and sometimes they succeed, especially if applied with strict discipline. Unfortunately, this situation leaves the chore of creating a modular environment up to proprietary framework conventions and configurations.
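As a reminder of what that discipline looks like in practice, here is a minimal constructor-injection sketch (all names are hypothetical). The coupling stays controlled only because every developer agrees to wire dependencies from the outside:

    // The class declares what it needs, not how to construct it.
    interface PaymentGateway { void charge(long cents); }

    class CheckoutService {
        private final PaymentGateway gateway;

        CheckoutService(PaymentGateway gateway) { // dependency supplied by the caller
            this.gateway = gateway;
        }

        void checkout(long cents) { gateway.charge(cents); }
    }

    class Wiring {
        public static void main(String[] args) {
            // The "container" here is just a main method; frameworks such
            // as Spring automate exactly this wiring through configuration.
            PaymentGateway test = cents -> System.out.println("charged " + cents);
            new CheckoutService(test).checkout(499);
        }
    }
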
Java also adds package namespaces and scope visibility to the mix as a means for creating modular compile-time and deployment-time mechanisms. But these language features are easily sidestepped, as I'll explain.

Packages as a modular solution

Packages attempt to add a level of abstraction to the Java programming landscape. They provide facilities for unique coding namespaces and configuration contexts. Sadly, though, package conventions are easily circumvented, frequently leading to an environment of dangerous compile-time couplings.
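To see how easily package conventions are sidestepped, consider this sketch (all names are hypothetical, spanning two source files in two different JARs). A class in a completely unrelated JAR only has to declare itself into the same package to gain access to the library's package-private internals:

    // File 1, in library.jar -- the author intends Secrets to stay internal.
    package com.example.internal;

    class Secrets {                       // package-private class
        static String apiKey = "s3cr3t";  // package-private field
    }

    // File 2, in other.jar -- declaring the same package defeats the hiding.
    package com.example.internal;

    public class Peek {
        public static void main(String[] args) {
            // Legal: Peek shares the package namespace, so the
            // package-private members of Secrets are fully visible.
            System.out.println(Secrets.apiKey);
        }
    }

Nothing in the standard runtime prevents this split-package arrangement, which is exactly the kind of coupling a true module system forbids.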
The state of modularity in Java at present (aside from OSGi, which I will discuss shortly) is most often accomplished using package namespaces, JavaBeans conventions, and proprietary framework configurations like those found in Spring.

Aren't JAR files modular enough?

JAR files and the deployment environment in which they operate greatly improve on the many legacy deployment conventions otherwise available. But JAR files have no intrinsic uniqueness, apart from a rarely used version number, which is hidden in a .jar manifest. The JAR file and the optional manifest are not used as modularity conventions within the Java runtime environment. So the package names of classes in the file and their participation in a classpath are the only parts of the JAR structure that lend modularity to the runtime environment.
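You can verify this with the JDK's own java.util.jar API. The sketch below (the JAR path is hypothetical) prints the optional version attribute, which nothing in the runtime ever consults:

    import java.util.jar.Attributes;
    import java.util.jar.JarFile;
    import java.util.jar.Manifest;

    public class JarVersionProbe {
        public static void main(String[] args) throws Exception {
            try (JarFile jar = new JarFile("lib/some-library.jar")) {
                Manifest mf = jar.getManifest(); // may be null: the manifest is optional
                if (mf == null) {
                    System.out.println("No manifest, so no version information at all");
                    return;
                }
                Attributes attrs = mf.getMainAttributes();
                // An optional attribute; many JARs simply omit it.
                System.out.println("Implementation-Version: "
                        + attrs.getValue(Attributes.Name.IMPLEMENTATION_VERSION));
            }
        }
    }
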
In short, JARs are a good attempt at modularization, but they don't fulfill all the requirements for a truly modular environment. Frameworks and platforms like Spring and OSGi use patterns and enhancements to the JAR specification to provide environments for building very capable and modular systems. Over time, however, even these tools will succumb to a very unfortunate side-effect of the JAR specification: JAR hell!

Classpath/JAR hell

When the Java runtime environment allows for arbitrarily complex JAR loading mechanisms, developers know they are in classpath hell or JAR hell. A number of configurations can lead to this condition.
First, consider a situation where a Java application developer provides an updated version of the application and has packaged it in a JAR file with the exact same name as the old version. The Java runtime environment provides no validation facilities for determining the correct JAR file. The runtime environment will simply load classes from the JAR file that it finds first or that satisfies one of many classpath rules. This leads to unexpected behavior at best.
Another instance of JAR hell arises where two or more applications or processes depend on different versions of a third-party library. Using standard class-loading facilities, only one version of the third-party library will be available at runtime, leading to errors in at least one application or process.
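When you suspect the wrong JAR has won that race, a short diagnostic reveals which archive a class was actually loaded from (a sketch; the class name is hypothetical):

    // Ask a class which JAR or directory it was loaded from.
    public class WhereFrom {
        public static void main(String[] args) throws Exception {
            Class<?> c = Class.forName("com.example.Library"); // hypothetical class
            // Null only for bootstrap classes; application classes report
            // whichever classpath entry the class loader matched first.
            System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
        }
    }
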
A full-featured and efficient Java module system should facilitate separation of code into distinct, easily understood, and loosely coupled modules. Dependencies should be clearly specified and strictly enforced. Facilities should be available that allow modules to be upgraded without having a negative effect on other modules. A modular runtime environment should enable configurations that are specific to a particular domain or vertical market, thus reducing the startup time and system footprint of the environment.

Modularity solutions for Java

Along with the modularity features mentioned so far, recent efforts add a few more. The following features are intended to optimize performance and enable extending the runtime environment:
  • Segmented source code: Source code separated into distinct, cached segments, each of which contains a specific type of compiled code. The goals include skipping non-method code during garbage sweeps, supporting incremental builds, and improving memory management.
  • Build-time enforcements: Language constructs to enforce namespaces, versioning, dependencies, and others.
  • Deployment facilities: Support for deploying scaled runtime environments according to specific needs, such as those of a mobile device environment.
A number of modularity specifications and frameworks have sought to facilitate these features, and a few have recently risen to the top in proposals for Java 9. An overview of Java modularity proposals is below.

JSR (Java Specification Request) 277

Java Specification Request (JSR) 277, the Java Module System, was introduced by Sun in June 2005 and is currently inactive. The specification covered most of the same areas as OSGi. Like OSGi, JSR 277 defines the discovery, loading, and consistency of modules, with sparse support for runtime modifications and/or integrity checking.
Drawbacks to JSR 277 include:
  • No dynamic loading and unloading of modules/bundles
  • No runtime checks for class-space uniqueness

OSGi (Open Service Gateway Initiative)

Introduced by the OSGi Alliance in November 1998, the OSGi platform is the most widely used modularity solution for Java and the closest thing to a formal standard. Currently at release 6, the OSGi specification is widely accepted and used, especially of late.
In essence, OSGi is a modular system and a service platform for the Java programming language that implements a complete and dynamic component model in the form of modules, services, deployable bundles, and so on.
The primary layers of the OSGi architecture are as follows (a sketch of a bundle activator follows the list):
  • Execution environment: The Java environment (for example, Java EE or Java SE) under which a bundle will run.
  • Module: Where the OSGi framework processes the modular aspects of a bundle. Bundle metadata is processed here.
  • Life-cycle: Initializing, starting, and stopping of bundles happens here.
  • Service registry: Where bundles list their services for other bundles to discover.
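The life-cycle and service-registry layers meet in a bundle's activator, which the framework invokes as the bundle starts and stops. Here is a minimal sketch (the Greeter service is hypothetical):

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    // A hypothetical service contract this bundle publishes.
    interface Greeter { String greet(String name); }

    public class GreeterActivator implements BundleActivator {

        // Called by the framework as the bundle starts.
        public void start(BundleContext context) {
            Greeter impl = name -> "Hello, " + name;
            // List the service in the registry so other bundles can discover it.
            context.registerService(Greeter.class, impl, null);
        }

        // Called as the bundle stops; the framework automatically
        // unregisters any services this bundle registered.
        public void stop(BundleContext context) { }
    }
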
One of the biggest drawbacks to OSGi is its lack of a formal mechanism for native package installation.

JSR 291

JSR 291 is a dynamic component framework for Java SE based on OSGi; it is currently in the final stage of development. The effort focuses on bringing OSGi into mainstream Java, as JSR 232 did for the Java mobile environment.

JSR 294

JSR 294 defines a system of meta-modules and delegates the actual embodiment of pluggable modules (versions, dependencies, restrictions, etc.) to external providers. This specification introduces language extensions, such as "superpackages" and hierarchically-related modules, to facilitate modularity. Strict encapsulation and distinct compilation units are also part of the spec's focus. JSR 294 is currently dormant.

Project Jigsaw

Project Jigsaw is the most likely candidate for modularity in Java 9. Jigsaw seeks to use language constructs and environment configurations to define a scalable module system for Java SE. Its primary goals include the following (a sketch of a module declaration follows the list):
  • Making it very easy to scale the Java SE runtime and the JDK down to small devices.
  • Improving the security of Java SE and the JDK by forbidding access to internal JDK APIs and by enforcing and improving the SecurityManager.checkPackageAccess method.
  • Improving application performance via optimizations of existing code and facilitating look-ahead program optimization techniques.
  • Simplifying application development within Java SE by enabling libraries and applications to be constructed from developer-contributed modules and from a modular JDK.
  • Requiring and enforcing a finite set of version constraints.
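Under Jigsaw's current design documents, a module declares its identity, dependencies, and exported packages in a module-info.java descriptor. Here is a sketch of what such a declaration is expected to look like (the module and package names are hypothetical, and the syntax may still change before release):

    // module-info.java -- sits at the root of the module's source tree
    module com.example.inventory {
        requires java.sql;                  // explicit dependency, enforced by the compiler
        exports com.example.inventory.api;  // only this package is visible to consumers
        // com.example.inventory.impl is not exported, so it stays encapsulated
    }

Because dependencies are declared rather than discovered on a flat classpath, the compiler and runtime can verify them up front instead of failing at some arbitrary point during execution.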

JEP (Java Enhancement Proposal) 200

Java Enhancement Proposal (JEP) 200, created in July 2014, seeks to define a modular structure for the JDK. JEP 200 builds on the Jigsaw framework to facilitate segmenting the JDK, according to Java 8 Compact Profiles, into sets of modules that can be combined at compile time, build time, and deploy time. These combinations of modules can then be deployed as scaled-down runtime environments that are composed of Jigsaw-compliant modules.

JEP 201

JEP 201 seeks to build on Jigsaw to reorganize the JDK source code into modules. These modules can then be compiled as distinct units by an enhanced build system that enforces module boundaries. JEP 201 proposes a source-code restructuring scheme throughout the JDK that emphasizes module boundaries at the top level of source code trees.

Penrose

Penrose would manage interoperability between Jigsaw and OSGi. Specifically, it would facilitate the ability to modify OSGi micro-kernels in order for bundles running in the modified kernel to utilize Jigsaw modules. It relies on using JSON to describe modules.

Plans for Java 9

Java 9 is a unique major release for Java. What makes it unique is its introduction of modular components and segments throughout the entire JDK. The primary features supporting modularization are:
  • Modular source code: In Java 9, the JRE and JDK will be reorganized into interoperable modules. This will enable the creation of scalable runtimes that can be executed on small devices.
  • Segmented code cache: While not strictly a modular facility, the new segmented code cache of Java 9 will follow the spirit of modularization and enjoy some of the same benefits. The new code cache will make intelligent decisions to compile frequently accessed code segments to native code and store them for optimized lookup and future execution. The code cache will be divided into three distinct segments: non-method code that will be stored permanently in the cache; code that has a potentially long lifecycle (known as "non-profiled code"); and code that is transient (known as "profiled code").
  • Build-time enforcements: The build system will be enhanced, via JEP 201, to compile and enforce module boundaries.
  • Deployment facilities: Tools will be provided within the Jigsaw project that will support module boundaries, constraints, and dependencies at deployment time.

Java 9 early access release

While the exact release date of Java 9 remains a mystery, you can download an early access release at Java.net.

In conclusion

This article has been an overview of modularity within the Java platform, including prospects for modularity in Java 9. I explained how long-standing issues like classpath hell contribute to the need for a more modular Java architecture and discussed some of the most recent new modularity features proposed for Java. I then described and contextualized each of the Java modularity proposals or platforms, including OSGi and Project Jigsaw.
The need for a more modular Java architecture is clear. Current attempts have fallen short, although OSGi comes very close. For the Java 9 release, Project Jigsaw and OSGi will be the main players in the modularity space for Java, with Penrose possibly providing the glue between them.

Windows 10 puts biometric security front and center

Windows 10 will provide a leap in biometric capabilities for the PC, built right into the operating system (in what Microsoft calls Windows Hello) and supported through Active Directory authentication. You'll be able to access your Windows devices -- compatible ones, that is -- using your face, iris, or fingerprint.
We've already seen Apple using fingerprint scanning in recent iOS devices, and the Android platform has supported facial recognition, finger-drawn patterns, and (on recent Samsung devices) fingerprint scanning for the last couple of years. But despite dalliances with fingerprint readers in laptops a decade ago, biometric security has not been common in computers.
Touted as the future of secure device and application access, biometric authentication provides much better security than a password written on a sticky note and shoved under your keyboard. Complex passwords and constant password-change requirements aid corporate security, but they sure make it hard on the user who has to create and remember all those passwords. Having technology that can validate that you are you -- even in the dark, thanks to the infrared camera technology Windows 10 will support -- is much better for the user.

Microsoft's approach to biometric security in Windows 10

But is such biometric security actually secure? Microsoft says the Windows Hello feature will be enterprise-grade and meet strict security requirements of the government, defense, finance, and health care industries.
Windows 10's Passport API will let developers build applications and secure websites that are authenticated through a PIN or Windows Hello biometric authentication. The Windows Hello biometric signature is stored on the device itself and shared with no one -- the same approach Apple uses for its Touch ID technology on iOS devices.
Windows Hello is not meant as a cross-network authentication mechanism; it's just for local access to the device and to Passport-enabled applications and websites -- again, similar to Apple's Touch ID. Windows Hello will require specialized hardware, so you'll need new PCs or mobile devices to take advantage of it, just as has been true in the iOS and Android worlds as those platforms added biometric capabilities.

Biometrics beyond Windows authentication

There are lots of great uses for biometrics beyond authentication, and you don’t necessarily have to wait for Windows 10 to benefit from some of those uses.
For example, the Biomids Instant-In Proctor application uses facial recognition to authenticate a person registered to take an assessment and confirm that the person registered didn’t get help from an outside source. It's a great way to prevent cheating on assessments.
Vision-Box's system uses facial recognition technology at airport check-in points and gates. The system takes a photo of passengers when they check in and get their passports validated. That information is then relayed to security screeners and government agencies.
Another example is Intercore's Driver Alertness Detection System. It monitors a driver's alertness level in real time, then notifies the driver (and third parties) when he or she appears drowsy, reducing the risk of accidents caused by drowsiness and fatigue. The system monitors 524 points on the driver's eyes, face, and head to determine the driver's alertness level.
Of course, biometrics is a key factor in the creation of next-gen mobile payment systems such as Apple Pay -- no more credit cards, just biometric-secured mobile payments.
Both the hardware and the operating system need to advance to make biometrics more secure. Windows 10 is where Microsoft is making its advances.

This story, "Windows 10 puts biometric security front and center" was originally published by InfoWorld.