Saturday, December 22, 2018

What is Shell Scripting?

In this post you will learn what a shell script is and why to write one. Normally, shells are interactive: the shell accepts commands from you (via the keyboard) and executes them. But if you run the same commands one by one (a sequence of 'n' commands), you can store that sequence of commands in a text file and tell the shell to execute the text file instead of entering the
commands manually. This is known as a shell script.

A shell script is a series of commands written in a plain text file. A shell script is similar to a batch file in MS-DOS, but it is more powerful than an MS-DOS batch file.

Why write a shell script?

  1. A shell script can take input from the user or from a file and output it to the screen.
  2. It is useful for creating your own commands.
  3. It saves a lot of time.
  4. It can automate day-to-day tasks.
  5. System administration tasks can also be automated.



Advantages of shell scripts
  • The commands and syntax are exactly the same as those entered directly at the command line, so the programmer does not need to switch to an entirely different syntax
  • Writing shell scripts is much quicker
  • Quick start
  • Interactive debugging etc.

Disadvantages of shell scripts

  • Prone to costly errors; a single mistake can change a command in a way that might be harmful
  • Slow execution speed
  • Design flaws within the language syntax or implementation
  • Not well suited for large and complex tasks
  • Provides only minimal data structures, unlike other scripting languages



Friday, December 21, 2018

Difference Between Data and Information

In this post you will learn what data and information are and the difference between them. Many students and freshers confuse the two. Data is the result of measurements of various attributes of entities such as a product, student, inventory item or employee.
The measurements may be recorded in alphabetical, numerical, image, voice or other forms. Thus, the raw and unanalysed numbers and facts about entities constitute data. Information, on the other hand, results from data when the data are organised or structured in some meaningful way. The processed data have to be placed in a context for them to derive meaning and relevance. Relevance in turn adds to the value of information in decisions and actions. Data processing requires some infusion
of intelligence (meaning, purpose and usefulness) into data to generate information. The application of intelligence may be in the form of principles, knowledge, experience and intuition to convert data into information.

Definition of Information

The term 'information' is a very common word and it conveys some meaning to the recipient. It is very difficult to define it comprehensively. Yet, Davis and Olson give a fairly good definition. They define information as "data that has been processed into a form that is meaningful to the recipient
and is of real or perceived value in current or prospective actions or decisions".
This implies that information:

  1. Is processed data
  2. Has a form
  3. Is meaningful to the recipient
  4. Has a value, and
  5. Is useful in current or prospective decisions or actions.




Differences between data and information

Though the words 'data' and 'information' are often used interchangeably, there is a clear distinction between the two.

Some of the major differences are as follows:

  1. Data are facts, but information, though based on data, is not fact.
  2. Though information arises from data, not all data become information. There is a lot of selective filtering of data before processing them into information.
  3. Data are the result of routine recording of events and activities taking place. Generation of information is user-driven, and it is not always automatic.
  4. Data are independent of users, whereas information is user-dependent. Most information reports are designed to meet the anticipated information needs of a user or a group of users. That is, information for one user is very likely to be just data for another user.

What is Booting?

In this post you will learn what booting is and the types of booting. In computing, booting is a bootstrapping process that starts the operating system when the user turns on a computer system. A boot sequence is the set of operations the computer performs when it is switched on which loads an operating system. Everything that happens between the time the computer is switched on and the time it is ready to accept commands/input from the user is known as booting.




Booting can be described in several equivalent ways:

  • The process of reading disk blocks from the start of the system disk (which contains the operating system) and executing the code within the bootstrap. This code reads further information off the disk to bring the whole operating system online. The bootstrap code contains device drivers that support all the locally attached peripheral devices, and if the computer is connected to a network, the operating system hands over to the network operating system so that the "client" can log onto a server.
  • The process of loading a computer's memory with the instructions needed for the computer to operate.
  • The process and functions that a computer goes through when it first starts up, ending in the proper and complete loading of the operating system.
  • The sequence of computer operations from power-up until the system is ready for use.

COLD BOOTING:
Cold booting is the situation when all the computer peripherals are OFF and we start the
computer by switching ON the power.

WARM BOOTING:
Warm booting is the situation when we restart the computer by pressing the RESET button
or by pressing the CTRL + ALT + DEL keys together.


Thursday, December 20, 2018

What are the Prerequisites to learn DevOps?

In this post you will learn the prerequisites to learn DevOps. DevOps, as you know is Dev+Ops. So, technically you must know how Development and Operations work.

However, if you look at the bigger picture, DevOps is not just automating Ops to help Dev. It's more about adding value to the organization by adopting a "DevOps culture".

DevOps is basically a culture; you could call it a software engineering culture. The aim of DevOps is to unify software development (Dev) and software operation (Ops). DevOps is a broad area and it involves many tools at different stages/phases. This culture minimizes the gap between developers and business operations by providing collaboration layers to both. Developers in DevOps want continuous innovation and product enrichment, whereas the Ops department oversees costing and delivery. It is basically a way of implementing development and operations together: a single team collaborates at every phase, whether it is development, testing, deployment or operations. The prerequisites to learn DevOps are outlined below.



With that in mind, there are certain prerequisites to become a DevOps engineer:

  • Knowing your tech stack, be it OS, DB, middleware etc., which includes Linux/Windows, Tomcat/WebLogic, Apache/Nginx and so on.
  • Having know-how of the build and deployment process: what to build, how to build, how to deploy, etc.
  • Some knowledge about daily Ops activities such as restarts, maintenance, backups, etc.
  • From a tool/technology point of view, there are no "defined prerequisites".
  • However, in most cases, basic knowledge of Jenkins, Ant/Maven, Java and Shell/Python/Ruby is required, and some knowledge about Docker/Cloud (AWS), Chef/Puppet etc. is an added plus.
  • Non-technically speaking, you must know how to add value: how to speed up the release cycle from Dev to QA to Prod. Yes, this involves automation at each and every level. Knowing the final goal and how you divide it into pieces is what matters most. That's why it is essential to know the end-to-end release cycle of your organization, which includes personnel from various teams/departments such as Dev, QA, Project Mgmt, Prod Support and many more.
  • Familiarity with concepts like CI, CD and Release Engineering is a prerequisite, and so is experience with tools like Jenkins, Bamboo, CircleCI, etc.


A DevOps person should not be only technically oriented. As you add more value to the organization, you must possess qualities such as good communication skills, a good vision to plan and execute, leadership qualities and more.

Tuesday, December 18, 2018

Is DevOps a Good Career?

In this post you will know whether DevOps is a good career or not. It really depends on what you are passionate about. I would choose passion over all other considerations, or else you will burn out and the money won't help (I've been there). Any position you pursue in tech is going to require your complete dedication to achieve success.

However, I think the DevOps area has a lot of growth potential in the future. While I can not speak to your interests I can tell you why I am drawn to the field after being a web based software developer and a data driven research programmer.

I have learned through experience in large and small organizations that if you are operations staff you’d better be automating your job, and if you are a developer you have to face the inevitability of getting down and dirty with operations if you are to stay relevant. Developers who won’t administer/monitor, and admins who won’t develop will increasingly become less and less valuable to organizations needing to stay competitive.

DevOps is exciting because you are always working with and integrating new technologies and solving new challenges. Essentially your job is to find a happy balance between operations and developers. This relationship is delicate and can blow up if not regulated. As a devops specialist your job is to integrate these two different mindsets. This requires that aspects of IT be securely shared so that you don’t have the blame game (which I myself have been a party to). Developers need to continually push code and operations want to keep everything running smoothly. The more integrated the systems and processes in use, the easier it is for each to do their job.

I personally like to think of IT as three separate phases that all contribute to the ultimate success of the enterprise tech ecosystem; packaging, automation, and scaling.

Packaging:

DevOps is great if you like to explore and work with a variety of technologies and processes. I think one of the first things to consider is the packaging of IT that the tech teams use to provide the organization's products and services. The better packaged and more malleable the packaging, the easier it is to keep everything standardized and reusable.

If you like playing with configuration management systems (Puppet, Chef, Ansible, etc…) and digging into imaging systems such as Docker you will like DevOps. I would caution that it is VERY important to create highly configurable packaging of the IT systems in use so that they can evolve as the organization’s needs change. This also makes it easier to modify for production, QA, staging, and development environments.



If you think about it, the amount of new technologies and services being released into the market is growing exponentially (especially with the add-on potential of all the open source frameworks in existence). In DevOps no technology is off limits and you find yourself continuously working with, integrating, and automating different technologies. As the amount of tech and services grows, so too does the demand for people who can put it all together into golden images (configuration-managed images on different environments).

My personal goal is to create machines as machine manageable data objects that are completely hands off on the production and QA environments. The goal is to allow programs written by different teams to efficiently automate as much as possible without needing to login to the machine. To me this is fun, and I think if you like variety and being a middleman (glue) you would like it.

Automation:

Your automation potential is only as good as your ability to package the infrastructure in a form machines can work with effectively. If you come from a development background you most likely have had to deal with brittle environments (at least in testing new technologies, which you should be continuously doing).

The DevOps specialist makes it easy for programmers and operations to automate their jobs so that we don’t have to reinvent the wheel over and over again. Ultimately, if the automation is good enough we can realize a scalable architecture (which is the end goal).

You should like scripting a lot. You don’t need to be the best programmer to accomplish this but the more integrated your approach the easier it will be to build on your previous work (which I like). Automation brings the machines to life and if you like seeing a bunch of moving pieces come together to achieve some measurable outcome you will like this part of the job.

I would recommend that you know at least one glue language; Ruby, Python, Go, etc… The more flexible the language, the better. Although, the beauty of automation is that many different languages can be brought together to create a unified system. If something needs to be built for speed, it’s easy enough to design that part in a language like C or Go while allowing other tasks that need more flexibility to be written in a higher level language. You definitely want to become very good at shell scripting which many times ties everything together.

There are two primary types of automated systems you will be developing; fire and forget scripts, and continuously running daemons (or agents). You should know when to apply each.

I personally can not see automation becoming less in demand in the future. The promise of the cloud is built on automation, and enterprise usage of the cloud is growing rapidly throughout organizations of all sizes and types.

Scaling:

If reusability is a passion of yours, I think you would definitely like DevOps. I believe the biggest factor in the successful tech organizations of the future will be their ability to scale rapidly while being able to deflate when not needed to minimize costs in downtime. Customers want speed. They don’t care about the tech behind the application as long as the application is reliable, zippy, and meets their needs.

If you can create packages of IT that can be easily automated in a portable fashion then I think you will have great prospects in the tech world in the future. Companies like Google and Facebook would never have gotten as popular as they are if they had not learned to scale their IT effectively.

Scalability is not easy to achieve and many would rather not have to worry about it, which explains the growth of scalability as a service offerings. But somebody has to know how. Think about the problems of the future; Data analysis, AI, internet of things, mobile consumption, scalable web driven apps, etc… While all of these tech areas require different skills to develop on their own, each is absolute garbage without the same fundamental building blocks. Want to jump from mobile to AI? DevOps could allow that. Want to play with that new SaaS service that is all the rage these days? DevOps can allow that.

DevOps is about being the glue that holds everything and everyone together, and to me that is what makes it so exciting. The possibilities are limitless and the technologies are always growing and evolving. And if you don’t focus on DevOps, you will still have to manage infrastructure as a developer anyway (even if only for yourself).

When I first started programming I started with a passion for machine learning in C, then over time I started creating Drupal sites for organizations large and small. Over time I felt suffocated by the limited nature of the technologies people expected me to work with day in and day out. It wasn’t that I did not like the technologies, but I felt like I was in single technology hell. And once you gain a lot of experience in a specific technology or system people just expect you to focus on that area; recruiters, managers, developers, everyone… With DevOps variety is part of the job description so if you ever feel trapped by technology and find yourself looking to the stars wondering what the hell did I get myself into, DevOps can free you from that limited mindset.

That is why I got into DevOps. I certainly don’t claim to be the best or the most experienced out there, but I am a heck of a lot happier these days. And I am constantly learning new things that I can apply to any new project, whether it be a new AI platform or a mobile application. I can spread my wings and I believe since you are asking this question, maybe you want to as well.

At the end of the day your happiness and the passion you feel for what you do is all that really matters.

Saturday, December 15, 2018

Top 5 Technologies to learn in 2019

In this post you will know the best programming languages/technologies to learn in 2019 to survive in the IT industry with a high package. In present-day IT, it is very difficult to single out one particular technology as the best among others, because computing evolves every day and every single advance paves the way for a new technology.

But, according to the present job scenario and Stack Overflow popularity, the technologies below have good growth opportunities:

1. Artificial Intelligence:


It covers technologies that are used for prediction purposes. The technology stack of AI includes:

Machine Learning
Deep Learning
Computer Vision
Human Computer Interaction
Robotics

2. Data Science:

Data Science is all about cleaning, analyzing, organizing, preparing and visualizing data.



It requires knowledge of the following:

Statistics
Machine Learning
Data mining
Data Analytics

3. Big Data and Cloud Computing:


These are other booming areas considered among the trending technologies in the present IT sector, because of the importance of data in the life of every individual and the consistent growth of social networks and eCommerce traffic.

4. Android Development:



As internet users are more comfortable using Android apps than websites, the demand for Android development has become very high. The two popular languages for building Android apps are:

Java
Kotlin

5. DevOps:


DevOps is the combination of the development and operations teams in a software organization, and it is an advancement over agile development.


Thursday, December 13, 2018

Difference Between checked Exception and Unchecked Exception in Java

In this post you will learn what checked and unchecked exceptions are. The following are the differences between them.

Checked exceptions are the exceptions that are checked at compile time. If some code within a method throws a checked exception, then the method must either handle the exception or specify it using the throws keyword.

Example: FileNotFoundException, EOFException, etc.

So, at compile time, the compiler will check whether a certain method throws any checked exceptions. If yes, it will check whether the method handles that exception, either with try/catch or with throws. If the method does not provide the handling code, the compiler will throw an error saying "unreported exception".

For example

import java.io.*;
class Example
{
    public static void main(String[] args)
    {
        PrintWriter pw = new PrintWriter("xyz.txt"); // creating a new PrintWriter object to write to a file named "xyz.txt"
        pw.println("Hello World");
    }
}

The above snippet is supposed to print "Hello World" to a file named "xyz.txt".

There is a chance that the file "xyz.txt" is not present in the specified directory, so the compiler will check whether any handling code is provided in case the file is not present.

In the above snippet, handling code is not provided either with try/catch or with throws, so the compiler will throw an error.

The same example with some modification:

import java.io.*;
class Example
{
    public static void main(String[] args) throws FileNotFoundException
    {
        PrintWriter pw = new PrintWriter("xyz.txt"); // creating a new PrintWriter object to write to a file named "xyz.txt"
        pw.println("Hello World");
    }
}
In this example, handling code is provided with throws, so the compiler will not throw any error.
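For completeness, the same example could also handle the exception with try/catch instead of declaring it with throws. A minimal sketch (reusing the same hypothetical "xyz.txt" file):

import java.io.*;
class Example
{
    public static void main(String[] args)
    {
        try {
            // creating a new PrintWriter object to write to a file named "xyz.txt"
            PrintWriter pw = new PrintWriter("xyz.txt");
            pw.println("Hello World");
            pw.close();
        } catch (FileNotFoundException e) {
            // handling code runs if "xyz.txt" cannot be created or opened
            System.out.println("File not found: " + e.getMessage());
        }
    }
}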



Unchecked Exceptions:

There are some exceptions which do not occur regularly in a program, and the compiler does not check for them at compile time; these kinds of exceptions are called unchecked exceptions.

Example: ArithmeticException, NullPointerException etc

For example:

class Example {
    public static void main(String[] args) {
        System.out.println(10 / 0); // ArithmeticException at run time
    }
}
The above program should throw an ArithmeticException, as division by 0 is not allowed.

In this case the program compiles fine, because the compiler does not check for unchecked exceptions, but the program will fail at run time, as division by 0 is illegal.

Monday, December 10, 2018

Highest Paying Companies for Data Scientist

In this post we look at some of the highest paying companies for data scientists. A data scientist is a professional responsible for collecting, analyzing and interpreting large amounts of data to identify ways to help a business improve operations and gain a competitive edge over rivals.

These are some of the top paying companies for data scientists:

Amazon
Microsoft
IBM
Google

So there you have the big names in data science, but if you're planning to start your career at one of them, I would rather suggest you skip the big names given above and concentrate on smaller companies and startups instead. The reason I say this is that big product-based companies look for a lot when it comes to hiring their data scientists.

Apart from skills and technological know-how, you need a lot of experience working as a data scientist with the trending technologies. I do not mean to demotivate you, but it would be extremely difficult to get placed at one of them. So, I would rather suggest you aim for smaller companies and startups like:

Zomato
Karvy Analytics
Mu Sigma
Social Cops
MobiKwik

These are a few of the popular startups that hire data scientists at good salaries. Apart from these, there are some leading product-based companies like Zoho, Freshdesk, Wingify, Haptik, etc. that can offer a great start to your data science career.



So, how can one get into these companies as a fresh data scientist?

Although these are smaller companies and startups, getting hired at them would not be easy either. In fact, some of these companies have quite strict hiring practices. But such companies provide good learning exposure, culture, growth opportunities and decent salary packages. To be precise, the starting salary package for a fresh data scientist at such companies is 7 LPA and above. So, what do these companies seek in their data scientists?

To get hired at one of these companies as a data scientist, you need to have strong logical and analytical skills, reasoning abilities, and of course decent soft skills. Apart from these skills, you need to have a great hold over the tools and technologies currently being used in data science.

So, what are those tools and technologies that you should know?

Basic knowledge of Statistics and Statistical analysis
R & Python
In-depth knowledge of machine learning algorithms like Logistic Regression, Linear Regression, etc.
Knowing tools like RapidMiner, MapReduce and Tableau would be an added advantage for you. Also, learn data cleaning, data mining, visualization and deployment so as to get placed in a good data science job.

Once you have learned them all, move on to working on projects. But why?

As you already know, good product-based companies and startups usually prefer to hire experienced professionals. This way they can be sure of the competency of the candidate. But as a fresher it might be difficult for you to get hired.

The best thing that can prove your skills and competency at this point is projects. Pick up a lot of challenging real-data projects and build a strong portfolio through them. Remember, the better your portfolio, the more good opportunities you will get. For this, you can pick up real data projects from platforms like Kaggle, etc. Here, you would get to work on public data sets. Summing this up, for getting a good paying job, follow this approach:

Acquire the skills in relevant tools and technologies that are used by good product based companies
Start working on projects and build a strong portfolio using them
Start applying for jobs at smaller product-based companies and startups like Zomato, Karvy Analytics, Mu Sigma, Social Cops, MobiKwik, etc., and eventually get placed.
Also, once you get placed at one such company, gain a few years of experience; you can always switch to a bigger product-based company later.

Did you know that data scientists with 5 years of experience are currently earning as much as 75 lakhs per annum? So, no doubt you have great earning opportunities in this field.

If you are yet to start learning these tools and technologies, here are a few platforms that you can use-

Edureka - Here, you would get to learn data science skills and tools through pre-recorded video lectures. You also get certificates here but if your goal is to get placed at a good paying job, I am not sure whether this platform would help you or not.

Simplilearn - Here, too you would get skill learning courses and certifications but let me tell you good product based companies look out for candidates having a good knowledge and hands on experience/practical exposure rather than meagre certifications.

edWisor - Here, you would get to acquire skills and technological know-how through a specifically curated career path in data science. You would also get to work on real data projects and get job assurance, which I don't think any other platform provides. A lot of product-based startups and companies hire their data scientists from here on the basis of the projects individuals do.

Saturday, December 8, 2018

How to reduce Execution time of Java Program?

In this post you will learn how to reduce the execution time of a Java program. The following are some of the best practices for Java programmers.

1. Always return empty Collections and Arrays instead of null

Whenever your method returns a collection or an array, always make sure you return an empty array/collection and not null. This will save a lot of if-else testing for null elements.
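As a small illustration, a hypothetical getEmployees() method might look like this (just a sketch, not from the original post):

import java.util.Collections;
import java.util.List;

public class EmployeeService {

    private List<String> employees; // may never have been populated

    // Return an empty list instead of null so callers can iterate safely
    public List<String> getEmployees() {
        if (employees == null) {
            return Collections.emptyList();
        }
        return employees;
    }
}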
2. Avoid creating unnecessary objects and always prefer to do Lazy Initialization:

Object creation in Java is one of the most expensive operations in terms of memory utilization and performance impact. It is thus advisable to create or initialize an object only when it is required in the code.
-------------------------------------------------------------------
import java.util.ArrayList;
import java.util.List;

public class ABC {

    private List<String> abc;

    public List<String> getABC() {
        // Lazy initialization: the list is created only when it is first requested
        if (null == abc) {
            abc = new ArrayList<>();
        }
        return abc;
    }
}
----------------------------------------------------------------------------
3. Always try to minimize mutability of a class:

Making a class immutable means making it unmodifiable. The information the class holds will stay as it is throughout the lifetime of the object. Immutable classes are simple and easy to manage; they are thread safe and they make great building blocks for other objects.
However, creating immutable objects can hit the performance of an app, so always choose wisely whether you want your class to be immutable or not. Try to make small classes with few fields immutable. To make a class immutable you can declare all its constructors private and then create a public static method that initializes an object and returns it.
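A minimal sketch of that approach, using a hypothetical Money class (private constructor plus a public static factory method):

public final class Money {

    private final String currency;
    private final long amount;

    // private constructor: instances can only be created through the factory method
    private Money(String currency, long amount) {
        this.currency = currency;
        this.amount = amount;
    }

    // public static method that initializes an object and returns it
    public static Money of(String currency, long amount) {
        return new Money(currency, amount);
    }

    public String getCurrency() { return currency; }

    public long getAmount() { return amount; }
}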





4. Try to use standard libraries instead of writing your own from scratch

Writing code is fun. But it is advisable to use an existing standard library which is already tested, debugged and used by others. This not only improves the efficiency of the programmer but also reduces the chances of adding new bugs to your code. Using a standard library also makes code readable and maintainable.

5. Wherever possible, try to use primitive types instead of wrapper classes

Wrapper classes are great, but at the same time they are slow. Primitive types are just values, whereas wrapper class instances are full objects that carry extra class information.
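For example, summing with a primitive long avoids the autoboxing that a Long accumulator causes on every iteration (a small, hypothetical sketch):

public class SumExample {
    public static void main(String[] args) {
        long primitiveSum = 0L; // primitive: just a value, no object creation
        Long wrapperSum = 0L;   // wrapper: each += boxes a new Long object

        for (int i = 0; i < 1_000_000; i++) {
            primitiveSum += i;
            wrapperSum += i;    // noticeably slower due to autoboxing/unboxing
        }
        System.out.println(primitiveSum + " " + wrapperSum);
    }
}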

6. Use Strings carefully

Always use Strings carefully in your code. A simple concatenation of strings can reduce the performance of your program. For example, if we concatenate strings using the + operator in a for loop, then every time + is used it creates a new String object. This affects both memory usage and execution time.
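In such cases a StringBuilder is the usual fix; a small sketch comparing the two approaches:

public class ConcatExample {
    public static void main(String[] args) {
        // Inefficient: each + in the loop creates a new String object
        String s = "";
        for (int i = 0; i < 1000; i++) {
            s = s + i;
        }

        // Better: StringBuilder appends into a single mutable buffer
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            sb.append(i);
        }
        String result = sb.toString();

        System.out.println(s.length() + " " + result.length());
    }
}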

7. Algorithm implementation
This is something you have to work on yourself. There is one quote which is rarely false: "There is always a better algorithm for this task." Naive recursion can make your program very slow; you can use recurrence relations to solve recursive problems if you can't do it some other way. Learn DP and number theory the hard way, because for a mathematical question you can make the program fast using number-theory tricks, and for string/text questions DP gives memoized solutions.
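To illustrate the point about memoization, here is a small sketch comparing naive recursion with a memoized (DP) version for Fibonacci numbers (Fibonacci is just an example problem, not one from the post):

import java.util.HashMap;
import java.util.Map;

public class Fibonacci {

    // Naive recursion: recomputes the same subproblems over and over (exponential time)
    static long fibNaive(int n) {
        if (n < 2) return n;
        return fibNaive(n - 1) + fibNaive(n - 2);
    }

    private static final Map<Integer, Long> memo = new HashMap<>();

    // Memoized (DP) version: each subproblem is computed only once (linear time)
    static long fibMemo(int n) {
        if (n < 2) return n;
        Long cached = memo.get(n);
        if (cached != null) return cached;
        long value = fibMemo(n - 1) + fibMemo(n - 2);
        memo.put(n, value);
        return value;
    }

    public static void main(String[] args) {
        System.out.println(fibNaive(40)); // noticeably slow
        System.out.println(fibMemo(40));  // effectively instant
    }
}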

How to Execute a Java Program?

In this post you will understand, step by step, how a Java program is executed. The following is the step-by-step process of executing a Java program.

  1. Whenever a program is written in Java, javac compiles it.
  2. The result of the Java compiler is the .class file, i.e. the bytecode, and not native machine code (unlike the C compiler).
  3. The bytecode generated is non-executable code and needs an interpreter to execute on a machine. This interpreter is the JVM, and thus the bytecode is executed by the JVM.
  4. Finally, the program runs to give the desired output.

Java program execution follows 5 major steps:

Editor
Compile
Load
Verify
and Execute

1. Editor - Here the programmer uses a simple editor or a notepad application to write the java program and in the end give it a .java extension

2. Compile - In this step the programmer gives the javac command and the .java files are converted into bytecode which is the language understood by the java virtual machine (and this is what makes java platform independent language). Any compile time errors are raised at this step

3. Load - The program is then loaded into memory. This is done by the class loader which takes the .class files containing the bytecode and stores it in the memory. The .class file can be loaded from your hard disk or from the network as well

4. Verify - The bytecode verifier checks whether the loaded bytecode is valid and does not breach Java's security restrictions

5. Execute - The JVM interprets the program one bytecode at a time and runs the program
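As a minimal walk-through of these steps, consider a hypothetical HelloWorld program (the file and class names are just examples):

// 1. Editor: save this source code in a file named HelloWorld.java
// 2. Compile: javac HelloWorld.java   -> produces HelloWorld.class (bytecode)
// 3-5. Load, verify and execute: java HelloWorld
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}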

Can I Switch Jobs Every Year in a Software Career?

In this post you will look at a question every software developer thinks about: can I switch jobs every year in my software career? Yes, of course you can. The job market has completely changed. Companies are not finding enough talented people; ask any recruiter and they will tell you their story. Every good candidate ends up with a minimum of 3-4 job offers when looking for a change. In this situation, HRs and managers are overlooking the number of companies on your CV. It is increasingly assumed that people who stick with one company for 5+ years are those who can't get offers. Having said that, a few tips need to be noted.
  • Job hopping can be done only for a limited time; eventually, one has to settle down. This period can easily stretch to 10 years of experience in today's software development. Later, you will move to architect/manager roles, which demand stability.
  • Try to add big names to your CV. That is a must for a job hopper, because when things are not right, they will save you.
  • Have a solid reason for every switch. If you are climbing the ladder with an improvement in brand each time, that would be best.
  • Have a solid command of your skill set, and confidence. Try to prove that your one year was worth more than that of existing employees. Be ready with your detailed contributions at every previous company.

On the other hand, from what I've seen, I would suggest that you don't.

Here’s what I see from people who change jobs frequently and in a short time:

They think they have a lot of knowledge in various technologies because they’ve seen and worked in parts or sections of that technology in their career. Unless they’ve continued working on that same technology in other jobs, they have superficial knowledge. It takes time to really understand something.



They don't understand the idea of maintenance. They write code that isn't very maintainable. I mean, why would they? They won't stay there long enough to care about maintainability and future-proofing for 5 years or so.

They don't write code that is testable. They don't do TFD. It takes effort to create the tests, and they're more interested in creating and then leaving, so why would they bother?

They may not have had the opportunity to create a mid to large project from scratch. Creating a new project is different from working on an existing project. It’s a different beast and I’m not talking about the coding. It takes design, planning, and constant back and forth between managers, engineers, owners, and users. They have to create a language based around the project for efficient communication between all parties. They have to think about the UX. They have to think about security. They have to think about resource availability. They have to think about training and IT support.

And, after many years of job hopping, they don’t get job offers anymore. And, they’re not happy about their workplace. They never seem to be.

So, please for your sake do not change jobs every year. I would suggest once every three years until you find a really nice place. But, if you do want to change jobs frequently then try consultant work. It’s more excusable when managers look at your resume.

There are companies that specialize in hiring and managing consultants. And some people are very happy with consultant jobs. They don't have to deal with office politics or worry about competing for that team lead or PM job. And they get to meet different managers and engineers, and they can create a strong network.

Best Unknown features of Java

In this post you will learn some of the unknown features of Java. Java is one of the most popular and widely used programming languages and platforms. A platform is an environment that helps to develop and run programs written in any programming language.
Java is fast, reliable and secure. From desktop to web applications, scientific supercomputers to gaming consoles, cell phones to the Internet, Java is used in every nook and corner.

The following are a few such features of Java:

1. Stamped Locks

Multi-threaded code has long been the bane of server developers (just ask Oracle Java Language Architect and concurrency guru Brian Goetz). Over time complex idioms were added to the core Java libraries to help minimize thread waits when accessing shared resources. One of these is the classic ReadWriteLock that lets you divide code into sections that need to be mutually exclusive (writers), and sections that don’t (readers).

On paper this sounds great. The problem is that the ReadWriteLock can be super slow (up to 10x), which kind of defeats its purpose. Java 8 introduces a new read-write lock, called StampedLock. The good news here is that this guy is seriously fast. The bad news is that it's more complicated to use and lugs around more state. It's also not reentrant, which means a thread can have the dubious pleasure of deadlocking against itself.

StampedLock has an "optimistic" mode that issues a stamp that is returned by each locking operation to serve as a sort of admission ticket; each unlock operation needs to be passed its correlating stamp. Any thread that happens to acquire a write lock while a reader was holding an optimistic lock, will cause the optimistic unlock to be invalidated (the stamp is no longer valid). At that point the application can start all over, perhaps with a pessimistic lock (also implemented in StampedLock.) Managing that is up to you, and one stamp cannot be used to unlock another – so be super careful.

Let’s see this lock in action-

long stamp = lock.tryOptimisticRead(); // non-blocking path - super fast
work();                                // we're hoping no writing will go on in the meanwhile
if (lock.validate(stamp)) {
    // success! no contention with a writer thread
}
else {
    // another thread must have acquired a write lock in the meanwhile, changing the stamp
    // bummer - let's downgrade to a heavier read lock
    stamp = lock.readLock(); // this is a traditional blocking read lock
    try {
        // no writing happening now
        work();
    }
    finally {
        lock.unlock(stamp); // release using the correlating stamp
    }
}

2. Concurrent Adders

Another beautiful addition to Java 8, meant specifically for code running at scale, is the concurrent “Adders”. One of the most basic concurrency patterns is reading and writing the value of a numeric counter. As such, there are many ways in which you can do this today, but none so efficient or elegant as what Java 8 has to offer.

Up until now this was done using Atomics, which used a direct CPU compare and swap (CAS) instruction (via the sun.misc.Unsafe class) to try and set the value of a counter. The problem was that when a CAS failed due to contention, the AtomicInteger would spin, continually retrying the CAS in an infinite loop until it succeeded. At high levels of contention this could prove to be pretty slow.

Enter Java 8’s LongAdders. This set of classes provides a convenient way to concurrently read and write numeric values at scale. Usage is super simple. Just instantiate a new LongAdder and use its add() and intValue() methods to increase and sample the counter.

The difference between this and the old Atomics is that here, when a CAS fails due to contention, instead of spinning the CPU, the Adder will store the delta in an internal cell object allocated for that thread. It will then add this value along with any other pending cells to the result of intValue(). This reduces the need to go back and CAS or block other threads.

If you're asking yourself when you should prefer concurrent adders over atomics to manage counters, the simple answer is: always.
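A minimal sketch of the usage described above (LongAdder lives in java.util.concurrent.atomic; the thread counts here are arbitrary):

import java.util.concurrent.atomic.LongAdder;

public class AdderExample {
    public static void main(String[] args) throws InterruptedException {
        LongAdder counter = new LongAdder();

        // Many threads can add concurrently with very little contention
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter.add(1); // or counter.increment()
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(counter.intValue()); // sample the counter: 200000
    }
}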

3. Parallel Sorting

Just as concurrent Adders speed up counting, Java 8 delivers a concise way to speed up sorting. The recipe is pretty simple. Instead of -

Arrays.sort(myArray);

You can now use –
Arrays.parallelSort(myArray);

This will automatically break up the target collection into several parts, which will be sorted independently across a number of cores and then grouped back together. The only caveat here is that when called in highly multi-threaded environments, such as a busy web container, the benefits of this approach will begin to diminish (by more than 90%) due to the cost of increased CPU context switches.

4. Switching to the new Date API

Java 8 introduces a complete new date-time API. You kind of know it’s about time when most of the methods of the current one are marked as deprecated... The new API brings ease-of-use and accuracy long provided by the popular Joda time API into the core Java library.

As with any new API the good news is that it’s more elegant and functional. Unfortunately there are still vast amounts of code out there using the old API, and that won’t change any time soon.

To help bridge the gap between the old and new API’s, the venerable Date class now has a new method called toInstant() which converts the Date into the new representation. This can be especially effective in those cases where you're working on an API that expects the classic form, but would like to enjoy everything the new API has to offer.
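A small sketch of bridging the two APIs with toInstant(); the conversion back uses Date.from(), which the classic Date class also provides:

import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.util.Date;

public class DateBridgeExample {
    public static void main(String[] args) {
        Date legacy = new Date(); // classic API

        Instant instant = legacy.toInstant(); // old -> new
        LocalDateTime local = LocalDateTime.ofInstant(instant, ZoneId.systemDefault());

        Date backAgain = Date.from(instant); // new -> old

        System.out.println(local);
        System.out.println(backAgain);
    }
}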



5. Controlling OS Processes

Launching an OS process from within your code is right there with JNI calls – it’s something you do half-knowing there’s a good chance you’re going to get some unexpected results and some really bad exceptions down the line.

Even so, it's a necessary evil. But processes have another nasty angle to them - they have a tendency to dangle. The problem with launching a process from within Java code so far has been that it was hard to control the process once it was launched.

To help us with this Java 8 introduces three new methods in the Process class -

destroyForcibly - terminates a process with a much higher degree of success than before.
isAlive tells if a process launched by your code is still alive.
A new overload for waitFor() lets you specify the amount of time you want to wait for the process to finish. It returns whether the process exited successfully or timed out, in which case you might terminate it.

Two good use-cases for these new methods are -

If the process did not finish in time, terminate and move forward:
if (process.waitFor(MY_TIMEOUT, TimeUnit.MILLISECONDS)) {
    // success!
}
else {
    process.destroyForcibly();
}
Make sure that before your code is done, you're not leaving any processes behind. Dangling processes can slowly but surely deplete your OS.
for (Process p : processes) {
       if (p.isAlive()) {
             p.destroyForcibly();
       }
}

6. Exact Numeric Operations

Numeric overflows can cause some of the nastiest bugs due to their implicit nature. This is especially true in systems where int values (such as counters) grow over time. In those cases things that work well in staging, and even during long periods in production, can start breaking in the weirdest of ways, when operations begin to overflow and produce completely unexpected values.

To help with this Java 8 has added several new “exact” methods to the Math class geared towards protecting sensitive code from implicit overflows, by throwing an unchecked ArithmeticException when the value of an operation overflows its precision.

int safeC = Math.multiplyExact(bigA, bigB); // will throw ArithmeticException if result exceeds +-2^31
The only downside is that it’s up to you to find those places in your code where overflows can happen. Not an automagical solution by any stretch, but I guess it’s better than nothing.

7. Secure Random Generation

Java has been under fire for several years for having security holes. Justified or not, a lot of work has been done to fortify the JVM and frameworks against possible attacks. Random numbers with a low level of entropy make systems that use random number generators to create encryption keys or hash sensitive information more susceptible to hacking.

So far selection of the Random Number Generation algorithms has been left to the developer. The problem is that where implementations depend on specific hardware / OS / JVM, the desired algorithm may not be available. In such cases applications have a tendency to default to weaker generators, which can put them at greater risk of attack.

Java 8 has added a new method called SecureRandom.getInstanceStrong() whose aim is to have the JVM choose a secure provider for you. If you’re writing code without complete control of the OS / hardware / JVM on which it would run (which is very common when deploying to the cloud or PaaS), my suggestion is to give this approach some serious consideration.
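A minimal sketch of that call; note that getInstanceStrong() can throw NoSuchAlgorithmException if no strong algorithm is configured on the platform:

import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public class StrongRandomExample {
    public static void main(String[] args) throws NoSuchAlgorithmException {
        // Lets the JVM pick a strong provider/algorithm from the java.security configuration
        SecureRandom random = SecureRandom.getInstanceStrong();

        byte[] key = new byte[32]; // e.g. material for a 256-bit key
        random.nextBytes(key);

        System.out.println("Generated " + key.length + " random bytes");
    }
}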

8. Optional References

NullPointers are like stubbing your toes - you've been doing it since you could stand up, and no matter how smart you are today, chances are you still do. To help with this age-old problem Java 8 is introducing a new template called Optional<T>.

Borrowing from Scala and Haskell, this template is meant to explicitly state when a reference passed to or returned by a function can be null. This is meant to reduce the guessing game of whether a reference can be null, through over-reliance on documentation which may be out-of-date, or reading code which may change over time.

Optional<User> tryFindUser(int userID) {
or -

void processUser(User user, Optional<Cart> shoppingCart) {
The Optional template has a set of functions that make sampling it more convenient, such as isPresent() to check if a non-null value is available, or ifPresent() to which you can pass a Lambda function that will be executed if isPresent is true. The downside is that, much like with Java 8's new date-time APIs, it will take time and work till this pattern takes hold and is absorbed into the libraries we use and design every day.

New Lambda syntax for printing an optional value -

value.ifPresent(System.out::print);
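To tie these pieces together, here is a small self-contained sketch; the user map and lookup logic are hypothetical:

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class OptionalExample {

    static final Map<Integer, String> USERS = new HashMap<>();
    static { USERS.put(1, "Alice"); }

    // The return type states explicitly that a user may be absent
    static Optional<String> tryFindUser(int userID) {
        return Optional.ofNullable(USERS.get(userID));
    }

    public static void main(String[] args) {
        Optional<String> value = tryFindUser(1);

        if (value.isPresent()) {
            System.out.println("Found: " + value.get());
        }

        // Lambda-style access, as in the ifPresent example above
        tryFindUser(2).ifPresent(System.out::println);

        // Fall back to a default when nothing is there
        String name = tryFindUser(2).orElse("anonymous");
        System.out.println(name);
    }
}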

What is Cache Memory?

In this post you will learn about cache memory, its types and its advantages. Cache memory is a special very high-speed memory. It is used to speed up and synchronize with the high-speed CPU. Cache memory is costlier than main memory or disk memory but more economical than CPU registers. Cache memory is an extremely fast memory type that acts as a buffer between RAM and the CPU. It holds frequently requested data and instructions so that they are immediately available to the CPU when needed.

Cache memory is used to reduce the average time to access data from the main memory. The cache is a smaller and faster memory which stores copies of the data from frequently used main memory locations. There are various independent caches in a CPU, which store instructions and data.

Cache memory lies between CPU and Main Memory.



It is also called CPU memory, which a computer microprocessor can access more quickly than it can access regular RAM. This memory is typically integrated directly with the CPU chip or placed on a separate chip that has a separate bus interconnect with the CPU.

Cache memory saves time and increases efficiency because the most recently processed data is stored in it, which makes fetching easier.

Level 1 or Register –
It is a type of memory in which the data immediately required by the CPU is stored and accessed. The most commonly used registers are the accumulator, program counter, address register, etc.

Level 2 or Cache Memory –
It is the fastest memory, with a faster access time, where data is temporarily stored for quicker access.

Level 3 or Main Memory –
It is the memory on which the computer currently works. It is small in size, and once the power is off the data no longer stays in this memory.

Level 4 or Secondary Memory –
It is external memory which is not as fast as main memory, but data stays in this memory permanently.

Functions of Cache Memory:

The basic purpose of cache memory is to store program instructions that are frequently re-referenced by software during operation. Fast access to these instructions increases the overall speed of the software program.

The main function of cache memory is to speed up the working mechanism of the computer.


Application of Cache Memory –

Usually, the cache memory can store a reasonable number of blocks at any given time, but this number is small compared to the total number of blocks in the main memory.
The correspondence between the main memory blocks and those in the cache is specified by a mapping function.


Types of Cache –

Primary Cache –
A primary cache is always located on the processor chip. This cache is small and its access time is comparable to that of processor registers.
Secondary Cache –
Secondary cache is placed between the primary cache and the rest of the memory. It is referred to as the level 2 (L2) cache. Often, the Level 2 cache is also housed on the processor chip.


What is Thread Safety in Java?

In this post you will understand Thread Safety in Java. Java provide multi-threaded environment support using Java Threads, we know that multiple threads created from same Object share object variables and this can lead to data inconsistency when the threads are used to read and update the shared data.
Here is an example of non thread-safe code, look at the code and find out why this code is not thread safe ?

/*
 * Non-thread-safe class in Java
 */
public class Counter {

    private int count;

    /*
     * This method is not thread-safe because ++ is not an atomic operation
     */
    public int getCount() {
        return count++;
    }
}

The above example is not thread safe because ++ (the increment operator) is not an atomic operation; it can be broken down into read, update and write operations. If multiple threads call getCount() at approximately the same time, these three operations may overlap with each other. For example, while thread 1 is updating the value, thread 2 may read the old value, which eventually lets thread 2 override thread 1's increment, and one count is lost because multiple threads called the method concurrently.



Thread safety in Java is the process of making our program safe to use in a multithreaded environment; there are different ways through which we can make our program thread safe:

  • Synchronization is the easiest and most widely used tool for thread safety in Java.
  • Use of atomic wrapper classes from the java.util.concurrent.atomic package, for example AtomicInteger.
  • Use of locks from the java.util.concurrent.locks package.
  • Using thread-safe collection classes; check this post for usage of ConcurrentHashMap for thread safety.
  • Using the volatile keyword with variables, to make every thread read the data from main memory and not from the thread cache.

Thread safety is the ability to have multiple threads execute a method or a block of code without causing any side effects to shared resources/objects.

In Java you can mark a block of code or a method with the synchronized keyword in order to make it thread safe, which makes sure only one thread executes the block at a time. Basically, it locks the object (or class) while a thread is executing the synchronized block, so other threads have to wait in a queue for the first thread to finish in order to get access to the locked object.
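Applying that to the Counter class above, a thread-safe version could use either synchronized or an AtomicInteger; a minimal sketch:

import java.util.concurrent.atomic.AtomicInteger;

public class SafeCounter {

    private int count;
    private final AtomicInteger atomicCount = new AtomicInteger();

    // Option 1: synchronized - only one thread at a time can execute this method
    public synchronized int getCount() {
        return count++;
    }

    // Option 2: AtomicInteger - the increment happens as a single atomic operation
    public int getAtomicCount() {
        return atomicCount.getAndIncrement();
    }
}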

The packages java.util.concurrent and java.util.concurrent.atomic were introduced in Java 5 with multiple utilities and wrappers in order to make thread-safe programming easy.

There won't be thread-safety issues if there are no shared resources. If your class doesn't have any instance variables, it is automatically thread safe. Final variables, immutable objects (a popular one is the String class), variables defined inside a method, and the wrappers defined in the java.util.concurrent.atomic package are all thread safe.


Friday, December 7, 2018

What is Google Fi?

In this post you will learn about Google Fi and why should we use Google Fi. Basically, it is a mobile virtual network operator (MVNO) by Google. It provides mobile data service on three mobile networks (T-Mobile, Sprint and U.S. Cellular). Initially, it was launched as “Project Fi” compatible with Nexus and Pixel smartphones (plus the Motorola Moto X4). But after three years Google has officially changed its name from “Project Fi” to “Google Fi”. Now it is compatible with the majority of Android devices and, for the first time ever, it also works with iPhones.

Benefits of Google Fi:

It automatically switches between networks depending on signal strength and speed.
Its service covers more than 170 countries and territories.
With Google Fi, you can use your data in 170-plus countries, without having to pay any roaming fees.
It offers unlimited international texting.


Phone calls in 135 countries will cost you 20 cents a minute over cellular connections and rates vary for Wi-Fi calls.
With Google Fi, you won’t have to pay for any unused data. It credits you approximately 1 cent for each MB of your remaining data.
Google Fi offers a special feature: you can use your phone as a wireless hotspot at no additional cost.
Recently, Google implemented Enhanced Network Beta which means that your data is private and cannot be viewed by anyone.
You can get referral credit of $100, only if you convince others to join Google's wireless network and they stay active for 30 days.

Difference Between SVN and Git

In this post you will learn about SVN and Git and difference between them. Apache Subversion (often abbreviated SVN, after its command name svn) is a software versioning and revision control system distributed as open source under the Apache License. Software developers use Subversion to maintain current and historical versions of files such as source code, web pages, and documentation whereas Git  is a version-control system for tracking changes in computer files and coordinating work on those files among multiple people. It is primarily used for source-code management in software development, but it can be used to keep track of changes in any set of files. As a distributed revision-control system, it is aimed at speed, data integrity, and support for distributed, non-linear workflows.

Now let us see the difference between them. SVN is a Centralized Version Control System (CVCS), and Git is a Distributed Version Control System (DVCS).

A centralized version control system operates on the basic idea that there is one single copy of the project that developers commit changes to, and where all versions of the project are stored.

A distributed version control system, however, works on the principle that each developer "clones" the project repository to their hard drive. A copy of the project is stored on every developer's local machine, and changes are either "pushed" up to the online repository, or "pulled" down from the repo to update the version that the developer has on their machine.



Git can be used to create a workflow that’s nearly identical to Subversion (SVN), except that it requires an extra step. Since the commits are made on your local copy of the repository, you need an extra command (push) to share these changes to the central shared repository. Since Git also allows you to control what you commit through staging changes to the index, there’s also a step there, though you can bypass that step with flags to the commit command.

If you plan to use Git in this way, SVN and Git are about equal. SVN is simpler, with fewer steps. Modern versions of SVN have made branching and resolution of branch merges work better, so there’s little to no advantage for Git in a centralized workflow. The only advantage Git has here is that you can continue making commits without access to the central repository, and merge those changes with the upstream changes later. The downside is that to maintain a clean history, you have to know when to do a rebase instead of a merge. This isn’t hard to learn, but it’s more complex than SVN.

However, Git makes it easy to use other workflows like Feature Branching, Gitflow, and Forking (aka GitHub) workflow. Git offers a lot more flexibility in how you can collaborate, which is a big reason for using revision control in the first place. It makes forking a project a trivial 1-step action — every clone of a repository is essentially already a fork — which is one of the reasons it has become popular within the open source community. This flexibility and power means that the primary reason to choose SVN these days is that it’s what’s already in use. And since you can actually use Git concurrently with SVN, for contributors to a repository, even that’s not always a good reason.

Git’s graph-based revision model makes it more powerful for branching, merging, and resolving conflicts. SVN has made significant improvement in this area in the last few years, but pressure from Git was a major reason for these improvements. Managing a Git repository is also much easier, and requires a lot less administrative overhead (granting special permissions, etc.). Git also gives all users a lot more tools to examine and work with repositories.


Basic UNIX Commands With Examples

In this post you will learn Basic UNIX Commands for every software Developer. Unix is a common Operating System. UNIX is used by the workstations and multi-user servers within the school.
On X terminals and the workstations, X Windows provides a graphical interface between the user and
UNIX. However, knowledge of UNIX is required for operations which aren't covered by a graphical
program, or for when there is no X windows system, for example, in a telnet session.

The UNIX operating system is made up of three parts; the kernel, the shell and the programs.

kernel
The kernel of UNIX is the hub of the operating system: it allocates time and memory to programs
and handles the filestore and communications in response to system calls.

shell
The shell acts as an interface between the user and the kernel. When a user logs in, the login
program checks the username and password, and then starts another program called the shell.

The following are the Basic commands:

ls (list) :

When you first login, your current working directory is your home directory. Your home directory has the same name as your user-name, for example, ee91ab, and it is where your personal files and subdirectories are saved.
To find out what is in your home directory, type

% ls (short for list)

The ls command lists the contents of your current working directory.

To list all files in your home directory including those whose names begin
with a dot, type

% ls -a

ls is an example of a command which can take options: -a is an example of an option. The options change the behaviour of the command. There are online manual pages that tell you which options a particular command can take, and how each option modifies the behaviour of the command.

mkdir (make directory):

We will now make a subdirectory in your home directory to hold the files you
will be creating and using in the course of this tutorial. To make a
subdirectory called unixpoint in your current working directory type

% mkdir unixpoint

cd (change directory)

The command cd directory means change the current working directory to 'directory'. The current working directory may be thought of as the directory you are in, i.e. your current position in the file-system tree.

To change to the directory you have just made, type

% cd unixpoint

pwd (print working directory)

Pathnames enable you to work out where you are in relation to the whole file-system. For example, to find out the absolute pathname of your home directory,

type cd to get back to your home-directory and then type 

% pwd

cp (copy)

cp file1 file2 is the command which makes a copy of file1 in the current working directory and calls it file2. What we are going to do now is take a file stored in an open-access area of the file system and use the cp command to copy it to your unixpoint
directory.

mv (move)

mv file1 file2 moves (or renames) file1 to file2

To move a file from one place to another, use the mv command. This has the effect of moving rather than copying the file, so you end up with only one file rather than two. It can also be used to rename a file, by moving the file to the same directory but giving it a different name.




rm (remove), rmdir (remove directory)

To delete (remove) a file, use the rm command. As an example, we are going to create a copy of the Tech.txt file then delete it.

Inside your unixpoint directory, type

% cp Tech.txt tempfile.txt
% ls (to check if it has created the file)
% rm tempfile.txt

% ls (to check if it has deleted the file)
You can use the rmdir command to remove a directory (make sure it is empty first). Try to remove the backups directory. You will not be able to since UNIX will not let you remove a non-empty directory.

clear (clear screen)
Before you start the next section, you may like to clear the terminal window of the previous commands so the output of the following commands can be clearly understood.

At the prompt, type
% clear
This will clear all text and leave you with the % prompt at the top of the window.

cat (concatenate)
The command cat can be used to display the contents of a file on the screen. Type:

% cat Tech.txt

tail

The tail command writes the last ten lines of a file to the screen. Clear the screen and type

% tail Tech.txt


grep:

grep is one of many standard UNIX utilities. It searches files for specified words or patterns.

chmod:

chmod is the command and system call which may change the access permissions to file system objects (files and directories).

cal:

It is used to display the calendar.

date:

It is used to display the date.

who:

This command is used to display the list of users currently logged in.










