Price: Free for subscribers
Available from: August 2018
If Java is unfamiliar territory for you or you are migrating from another language to Java, this article should help you understand what you need and what you don't.
I suppose most of you, as explorers and adventure seekers, have come across the following problem: you've set yourself a goal and are willing to try out a new technology in an unknown field, to reach "unexplored" ground and expand your horizons.
The problem is that, even before you start, an important question arises: which way do you go?
The answer is quite simple. You need a compass.
But first, a few words about me. My name is Samuil and I am a Java beginner. I'm currently working at Dreamix, a custom software company, where we develop software using Java with Spring Boot and Angular. Prior to starting this job, I went through intensive training at a popular software academy, to which I am extremely grateful for the thorough preparation. I became part of the Dreamix team three months ago, when I started using Java in the real world. I would like to share my personal experience of the adaptation process: what to do and how to accomplish your goals in this direction. I believe the information will be useful to you!
Nowadays, finding information seems very easy. I believe most of you have already searched for something along the lines of: ‘How to teach yourself to code’, ‘Learn to code’, ‘How to get started writing code’ and so on. In front of you appear thousands of articles, containing all kinds of suggestions and tips. I went through the same thing and I know how confusing it can be.
Even though much of the information is useful, you can easily get lost in the labyrinth of data.
I am currently finishing my Masters in ‘Electrical Engineering – Informatics and Communication’ in French. Nearly a year and a half ago, while longing for practical experience, and realizing that university in and of itself could not provide it, I decided to try something new. As an experiment, I started attending, as already mentioned, a software academy, and it completely changed my understanding of programming. There, they prepared me for real life, where the work contains an array of problems and requirements.
What I enjoyed most about my time at the academy was the atmosphere: young, talented, and motivated people who are in no hurry to leave the lectures, who do not run away from innovation, and who keep pace with new technologies. People who want to share their experience with anyone who has the thirst to learn. Being a part of this is sure to motivate you.
Another very valuable quality of this type of training is that you are immersed in an environment of people of all ages, occupations, and nationalities: some attending university, some working, some taking care of their families, all willing to dedicate time to attending lectures, studying, and expanding their technical horizons. Time does not matter to the lecturers; the only thing they care about is whether the shared knowledge has reached you and been understood. For them, questions from students are a joy, not a nuisance.
In my view, it is useful at first to be surrounded by people with more knowledge and experience, to guide you through the sea of information, in a way that only the most important and meaningful things reach you. Then, of course, in your quest to get deeper into things, you will search and build up your knowledge.
To me, finding yourself in this environment, where you are thrown straight into the deep end and everyone around you moves forward at a fast pace and with high goals, is a test in itself: enduring the intense tempo and the constant inflow of new information takes devotion, perseverance, and above all the desire to keep up. If you manage to do so, you cannot fail to achieve a good result. This is also a great way to find out whether programming is for you, and whether you're ready to do it eight hours a day, five days a week, despite all the difficulties you will encounter.
The academy is … how to put it, a test projection of reality. My advice is simple. If you can locate a software academy where you live and if you have the opportunity to sign up, do not be afraid to do so. On the contrary, dive right in!
a) Java: A Beginner's Guide, Seventh Edition – starts with the basics: how to create, compile, and run a Java program, then moves on to the keywords, syntax, and constructs that form the core of the Java language. The book extends to some of Java's more advanced features, such as multithreaded programming, generics, lambda expressions, Swing, and JavaFX. In it, you can also find details on Java SE 9's innovative module system and an introduction to JShell, Java's new interactive programming tool.
b) Head First Java, 2nd Edition – offers a highly interactive learning experience that lets new programmers pick up the fundamentals of the Java language quickly. The book is distinguishable from the other books of this type by the mind-stretching exercises, memorable analogies, humorous pictures, and casual language.
c) Java Precisely, Third Edition – offers many examples and some essential libraries. It is both for beginners and more experienced Java programmers.
d) Udemy courses:
e) “Stack Overflow” – every programmer’s favorite website – also the largest, most trusted online community for developers to learn, share their programming knowledge, and build their careers.
f) “Codingame” – Coding games are similar to online tutorials. In fact, the two are best used together for a mix of theory, practice and fun.
For people who feel comfortable with the basic principles and concepts of programming and would like to expand their knowledge, here are some additional resources that helped me.
Currently, one of my goals is to earn the Oracle Certified Associate Java (OCA) certificate. My trusted sidekick in this initiative is the OCA: Oracle Certified Associate Java SE 8 Programmer I Study Guide: Exam 1Z0-808. When I first opened it, I noticed that it is written in great detail and is very comprehensible, covering some of the most basic and common features of the language as well as other, rarer ones that are tremendously valuable to those who want to improve the quality of their code. At the same time, I also decided to turn to resources that would improve my thinking and give me different ways to solve a problem. Effective Java (3rd Edition) is the resource for this.
The book is a bit difficult to read and it takes time. I read it slowly, continuing with a new dose of information only when I am sure I have fully understood the previous paragraph. You will encounter a number of "non-standard" techniques that are beyond the repertoire of the usual programmer, but once you understand the idea behind them, you'll want to put them to use as soon as possible.
What I like about this book is the fact that everything is thoroughly explained. It makes you ask yourself why you would prefer to make one decision over another when encountering a problem. What the advantages and disadvantages of your choice are, and what other options you have. It gives you great tips and makes you think, it encourages you to add new knowledge to your existing “toolkit” and before you know it, you start using it.
Java: The Complete Reference, Ninth Edition. This Oracle Press resource explains in detail how to develop, compile, debug, and run Java programs. JavaBeans, servlets, applets, and Swing are examined, and real-world examples demonstrate Java in action. New Java SE 8 features such as lambda expressions, the stream library, and default interface methods are also discussed in detail. In addition, it offers a solid introduction to JavaFX.
Core Java Volume I – Fundamentals (11th Edition) explains the most important language and library features and shows how to build real-world applications with thoroughly tested examples. The new features introduced with Java SE 9 are all covered in depth. This book encourages readers to dive deeper into the most critical features of the language and core libraries.
Core Java Volume II – Advanced Features (11th Edition). As you can tell from the name, it takes the reader to the next level by covering advanced user-interface programming and the enterprise features of the Java SE 9 platform.
I hope that as a Java beginner and a person who is relatively new to programming, I’ve been able to help you with an idea, or by providing a good reading list.
In the end, you know best what's best for you and your goals. I wish you nothing but success in this adventure and advise you not to be afraid to dive deep into the Java world!
Are you ready for JAX London this October? We talked to two speakers from this year’s program to find out what they think about Java’s new release cadence, what's still missing, their favorite developer tools, and more.
Peter Lawrey: I feel a 6-month release cycle is a significant improvement, though I suspect most people find it confusing and it puts them off upgrading. However, in time we will see greater adoption of the releases between Long Term Support releases.
The advantage of this approach is that releases are driven by date instead of by feature set. Anything not ready by that date doesn't make the release. The downside is that you can't plan for a particular feature to be available in a specific release (unless it is already available).
Simon Ritter: I think the idea of a faster release cadence is, in principle, a good idea. Often developers have been frustrated by the slow progress of Java (both the language syntax and the core libraries) due to the long time between releases (since JDK 5 this has ranged from just over two years to just over four and a half). A faster, predictable release schedule means access to new features more quickly. The downside of this is how users approach deployment. Do they switch JDK every six months, every year or just for what Oracle are classifying as Long Term Support releases (but only if you pay)?
Whilst JavaFX is popular with many people, it never really gained critical mass as a core part of the JDK.
Peter Lawrey: We have tested with Java 9 and 10 and found some issues which have been fixed in the Java 11 early access. Most likely we will not migrate before Java 11.
Simon Ritter: I have, but that’s not a significant data point! All my code is used for demos to show people how to use new features (like local variable type inference) so I always make sure I’m using the latest version. For most of my personal projects, I’m still running JDK 8.
Peter Lawrey: Java 10 doesn’t have many new features, though that is to be expected coming just 6 months after Java 9. Only time will tell how useful the use of var will be in improving readability.
Simon Ritter: Tough question. I’ll be contentious and say I’d rather have value types, which will provide a lot of performance and clearer code benefits than local variable type inference. Adding var to Java is never something I’ve felt myself needing in the past.
Peter Lawrey: Java 9 to 11 feels like small but significant housekeeping for how the JVM works. I believe this had to happen, but for me they don’t have any really compelling features. Most likely, developers will see Java 11 as a better, cleaner implementation of the features they use in Java 8.
Simon Ritter: No. The JDK 11 release contents are pretty much locked down now, as it just entered the ramp-down phase. If you look at the list of JDK Enhancement Proposals (JEPs) that will be included, there is only one for the language, which is an extension of local variable syntax to lambda expressions. Considering lambdas already have type inference, the only use of this is if you want to include annotations on lambda parameters. Most of the changes to the core libraries are the removal of the java.se.ee aggregator module and its components (like java.corba, java.xml.bind and java.xml.ws). Hardly compelling reasons to migrate.
Peter Lawrey: Decoupling libraries which have specific use cases is a good thing. JavaFX is one of these.
Simon Ritter: On the one hand, I think it's a shame. JavaFX is an excellent library for developing rich client applications that aren't suited to the typical web/HTML5/CSS/JavaScript approach. On the other, the reality is that, whilst it is popular with many people, it never really gained critical mass as a core part of the JDK. Thankfully, it is a separate open source project, and people like Gluon are doing a lot of work to ensure that it is still easy to use JavaFX with JDK 11.
Peter Lawrey: I feel modular Java is a significant improvement for the JVM. However, I feel it will take some time before it is widely used in other libraries. When most developers are on Java 11+, Jigsaw could be the standard way to package libraries.
Simon Ritter: We had to have modular Java. If you go back to JDK 1.0, there were only 211 classes in the core libraries. In JDK 8, there are over 4,500. We needed to organize this rich set of functionality in a more logical and configurable way. For me, it’s not a question of Jigsaw or Maven (maybe Jigsaw or OSGi). The developers of the Java Platform Module System did a great job; however, I would have preferred to see them focus just on modularizing the JDK and letting people use other frameworks on top of that (like OSGi) to modularize applications.
When most developers are on Java 11+, Jigsaw could be the standard way to package libraries.
Peter Lawrey: We need more rounds of Project Coin-style improvements. There are still too many gotchas and quirks for developers which have been addressed in other languages like C# or Kotlin.
Simon Ritter: From my perspective, nothing is missing. There’s always more that can be added (like value types). I’d like to see some of the bigger OpenJDK projects deliver; Amber for less boilerplate code, Valhalla for value types and Loom for fibres.
Peter Lawrey: Its simplicity. It takes less time to feel you have mastered all its features, and fewer features mean fewer corner cases. A large marketplace means developers can more easily find work, and organizations can more easily find the developers they need.
Simon Ritter: James Gosling famously described Java as a "Blue collar programming language", and that is spot on. It's a platform that was designed to get the job done. There are many other languages with different benefits, but Java seems to provide a lot of what developers are looking for. The other really key benefit of Java is the JVM. Even setting platform neutrality aside, the performance and reliability benefits of a managed runtime are what give Java much of its strength and appeal.
Peter Lawrey: I am a fan of IDEs in general and IntelliJ in particular. Many developers still use text editors, which are easier to get started with but not as efficient or effective.
Simon Ritter: I'm really in the minority, but I particularly like NetBeans as an IDE. IntelliJ seems to have taken over from Eclipse as the most popular, but I still find NetBeans has everything I need to develop code quickly and easily. I'm hoping that, with Oracle contributing it to the Apache Foundation, it will continue to be developed and keep up with the faster release cadence of the JDK.
Simon Ritter will deliver one talk at JAX London 2018, in which he takes a look at how the Java platform is evolving with the introduction of big features like the Java Platform Module System (JPMS) in JDK 9, local variable type inference in JDK 10 and dynamic class file constants in JDK 11.
Peter Lawrey will deliver one talk at JAX London 2018, where you will learn how to design a high-performance blockchain in Java.
We live in interesting times for Java. As a platform, Java has changed more in the last year than it had in the preceding five. In this article, JAX London speaker Simon Ritter will go over the current state and future directions for the Java platform.
Last year, JDK 9 was finally released after a series of delays. This release included a somewhat contentious feature, the Java Platform Module System (JPMS). I’ve given a lot of presentations both to customers and at conferences since then and it’s given me a chance to poll a broad cross-section of Java developers about their Java use. What’s become clear is that adoption of JDK 9 and even JDK 10 is almost non-existent. At the moment, the vast majority of developers are quite happy using JDK 8. This version is stable, secure, and includes functional programming elements not seen before in Java.
There are two reasons that a new version of Java is not proving a hit, even though it would typically attract a number of early adopters.
First, there is a major change in strategy for the development of the JDK. In the past, developers have been reasonably confident that code developed on one version of Java will run on later versions without change. In reality, this has not always been the case. JDK 1.4 added the assert keyword and JDK 5 added enum, which meant you couldn’t use these as variable names anymore. Neither of these changes broke much code and the changes required to get your application working again were trivial. There are a couple of other things that have changed along the way, but I’ve taken code compiled on JDK 1.0 and run it without change on JDK 8. There really aren’t many development platforms that can say that.
One of the key reasons for this level of backward compatibility is that new APIs and features have been added to Java, but nothing has been removed. This is in spite of nearly 500 API elements having been deprecated since JDK 1.1. Whilst this is good for moving applications to newer versions of the platform without change, the technical debt of the JDK itself has become unmanageable. With the introduction of JPMS, the decision was made that it was time to start removing antiquated features from the JDK as well as adding new ones.
The basic idea of JPMS is to divide the monolithic core libraries (which have grown from roughly 200 classes in JDK 1.0 to 4,500 in JDK 8) into more manageable pieces. By doing this, it is possible to create Java runtimes tailored to a specific application, only including the modules required by the application. Given the popularity of developing and deploying applications as a set of microservices, this is a very sensible idea.
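As a rough illustration, such a tailored runtime can be produced with the jlink tool that ships with JDK 9 and later; the module list and output path below are made-up examples:

jlink --add-modules java.base,java.logging --output /opt/myapp-runtime

The resulting directory contains a trimmed-down java launcher plus only the listed modules and their dependencies, which is typically far smaller than a full JDK installation.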
JPMS also includes a second major change, which is the encapsulation of all internal JDK APIs. These are APIs not intended for general use; there is no public documentation for these APIs and developers have always been explicitly warned not to use them. They have also been warned that these APIs may be removed without notice.
The reality is that, whilst most Java application developers don't use these APIs, those who develop frameworks and libraries often do. Encapsulating them therefore had the potential to break many applications. The JDK developers reversed their stance on this and introduced a set of command line switches that enable applications to override the encapsulation.
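For illustration, the most commonly used of these switches is --add-opens, which opens a named package of one module for deep reflection by code in another. The module/package pair and jar name here are just placeholders:

java --add-opens java.base/java.lang=ALL-UNNAMED -jar app.jar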
The second and more substantial reason for the lack of adoption of either JDK 9 or JDK 10 is the significantly faster release cadence of the JDK. Rather than planning a set of features that had to be completed before a JDK could be released, we now have a time-driven model with a new release every six months.
Clearly, it is not feasible for Oracle to support all of these releases, as their number will proliferate over time. The decision was made to use a Long Term Support (LTS) model similar to the one used by the Ubuntu community. A new LTS release will occur every three years. To get things started, JDK 8 was classified as an LTS, and the next one will be JDK 11. When looking at migrating applications to a newer JDK, many developers have decided to wait for JDK 11.
Unfortunately, while the logic behind this decision might seem sound, it overlooks some subtle announcements from Oracle regarding the availability of updates to the JDK.
In the past, we were used to the current JDK having public updates and a period of overlap for these updates, even after the next release came out. This has varied from a little over a year to nearly three years, allowing users to migrate between JDK revisions at a time that is most convenient for them. Testing and stability can be assured before the move is made.
This will no longer be the case. Starting with JDK 9, public updates are only available until the next release of the JDK, i.e. six months. Oracle has also switched to providing two binaries of the JDK: the traditional Oracle JDK under the Oracle Binary Code License and a new OpenJDK-based binary under a GPLv2 with classpath exception license. As of JDK 11, the Oracle binary that most people have been using for deployments is only available for development and testing. If you want to use the Oracle binary for a commercial application, you will need a support contract from Oracle with its associated costs. Azul Systems provides an alternative build of OpenJDK called Zulu. This is a direct replacement that requires no coding or configuration changes to use. Community builds of Zulu are free to download and use, and there is a very reasonably priced commercial option (Zulu Enterprise) with timely security updates and bug fixes.
There are many great ideas for the future directions of Java. OpenJDK projects like Valhalla for value types, Project Amber for simplified syntax and Project Loom for massive multithreading are just three. Clearly, the Java platform is not standing still and developers will want to continue to update to newer and better versions of the JDK. How quickly and easily that happens remains to be seen.
Simon Ritter will be delivering a talk at JAX London 2018 on Tuesday, October 9 that goes over how the Java platform is evolving with the introduction of big features like the Java Platform Module System (JPMS) in JDK 9, local variable type inference in JDK 10 and dynamic class file constants in JDK 11.
Useful for optimizing memory consumption, a heap dump is a snapshot of the memory of a Java process. In this article, Ram Lakshmanan explores seven different options to capture heap dumps.
Java heap dumps are vital artifacts to diagnose memory-related problems such as slow memory leaks, Garbage Collection problems, and java.lang.OutOfMemoryError. They are also vital artifacts to optimize the memory consumption.
There are great tools like Eclipse MAT and Heap Hero to analyze heap dumps. However, you need to provide these tools with heap dumps captured in the correct format and at the correct point in time.
This article gives you multiple options to capture heap dumps. In my opinion, though, the first three are the most effective options to use, and the others are good options to be aware of.
jmap prints heap dumps to a specified file location. This tool is packaged within the JDK and can be found in the <JAVA_HOME>\bin folder.
Here is how you should invoke jmap:
jmap -dump:live,file=<file-path> <pid>
where
pid: the ID of the Java process whose heap dump should be captured
file-path: the path of the file to which the heap dump will be written
Example:
jmap -dump:live,file=/opt/tmp/heapdump.bin 37320
Note: It's quite important to pass the "live" option. If this option is passed, only live objects in memory are written to the heap dump file. If it is not passed, all objects, even those that are ready to be garbage collected, are included, which increases the heap dump file size significantly and makes the analysis tedious. To troubleshoot memory problems or optimize memory, the "live" option should suffice.
When an application experiences a java.lang.OutOfMemoryError, it's ideal to capture a heap dump right at that point. This helps you diagnose the problem, because you want to know what objects were sitting in memory and what percentage of memory they were occupying when the java.lang.OutOfMemoryError occurred.
However, in the heat of the moment, the IT/operations team often forgets to capture the heap dump. Not only that, they also restart the application. It's extremely hard to diagnose any memory problem without a heap dump captured at the right time.
That's where this option comes in very handy. When you pass the -XX:+HeapDumpOnOutOfMemoryError JVM argument at application startup, the JVM will capture a heap dump at the exact point when it experiences an OutOfMemoryError.
Sample Usage:
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/tmp/heapdump.bin
Note: The captured heap dump will be written to the location specified by the "-XX:HeapDumpPath" option.
Best Practice: Keep this option configured in all your applications at all times, as you never know when an OutOfMemoryError will happen.
The jcmd tool is used to send diagnostic command requests to the JVM. It's packaged as part of the JDK and can be found in the <JAVA_HOME>\bin folder.
Here is how you should invoke jcmd:
jcmd <pid> GC.heap_dump <file-path>
where
pid: the ID of the Java process whose heap dump should be captured
file-path: the path of the file to which the heap dump will be written
Example:
jcmd 37320 GC.heap_dump /opt/tmp/heapdump.bin
JVisualVM is a monitoring and troubleshooting tool packaged within the JDK. When you launch it, you can see all the Java processes running on the local machine. You can also use it to connect to a Java process running on a remote machine.
Steps:
Fig: Capturing Heap Dump from JVisualVM
There is a com.sun.management:type=HotSpotDiagnostic MBean with a "dumpHeap" operation; invoking this operation captures a heap dump. "dumpHeap" takes two input parameters: the path of the file to which the heap dump should be written, and a boolean flag indicating whether only live objects should be included.
You can use JMX clients such as JConsole, jmxsh, or Java Mission Control to invoke this MBean operation.
Fig: Using Java Mission Control as the JMX client to generate heap dump
Instead of using tools, you can also capture heap dumps programmatically from within the application. There might be cases where you want to capture a heap dump based on certain events in the application. Here is a good article from Oracle that gives the source code for capturing heap dumps from the application by invoking the com.sun.management:type=HotSpotDiagnostic MBean that we discussed in the approach above.
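As a minimal sketch of what such code might look like (the class and method names here are my own, not Oracle's), the heap dump can be triggered through the platform MBean server and the com.sun.management.HotSpotDiagnosticMXBean interface:

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    private static final String HOTSPOT_BEAN_NAME =
            "com.sun.management:type=HotSpotDiagnostic";

    // Writes a heap dump to filePath; 'live' restricts the dump to live objects,
    // mirroring jmap's -dump:live option discussed above.
    public static void dumpHeap(String filePath, boolean live) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean mxBean = ManagementFactory.newPlatformMXBeanProxy(
                server, HOTSPOT_BEAN_NAME, HotSpotDiagnosticMXBean.class);
        mxBean.dumpHeap(filePath, live);
    }

    public static void main(String[] args) throws Exception {
        dumpHeap("/opt/tmp/heapdump.bin", true);
    }
}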
If your application is running on IBM WebSphere Application Server, you can use the administrative console to generate heap dumps.
Steps:
You can also use wsadmin to generate heap dumps.
How do you manage your Java object memory? In this article, Juraj Papp will walk you through Java structs, the memory layout of Java objects, and how to handle them like a pro.
If you are familiar with Java objects, you may already know that, while they provide automatic memory management, there is a cost associated with it.
In this article, we are going to take a look at the memory layout of Java objects, find out what the overhead is and show what we can do about it.
When you are first introduced to Java, you find out that each Java primitive type has its associated boxed version and that there is a difference between the two. To illustrate, consider the following arrays:
float[] arrayPrim = new float[1000];
Float[] arrayBoxed = new Float[1000];
Both arrays are capable of storing one thousand floats. Let's calculate how much memory they use. We know that a float uses four bytes, so one thousand floats use four thousand bytes. Since 'arrayPrim' is an object and an array, we add 16 bytes for the array object header, resulting in 4016 bytes for 'arrayPrim'. A boxed Float is an object, so we add 12 bytes for the object header; a Float thus takes 16 bytes, and one thousand of these add up to 16000 bytes. The array 'arrayBoxed' is an array of references, at four bytes per reference; one thousand references plus the array object header add up to 4016 bytes. This brings the memory usage of 'arrayBoxed' to 20016 bytes.
SEE ALSO: 7 ways to capture Java heap dumps
For our calculation, we have used 4 bytes per object reference, 12 bytes per object header, and 16 bytes per array object header, which are typical values in 64-bit Java with the default -XX:+UseCompressedOops.
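If you want to verify these figures on your own JVM, the OpenJDK JOL (Java Object Layout) tool can print the actual header size, field offsets, and padding. This minimal sketch assumes the jol-core library is on the classpath:

import org.openjdk.jol.info.ClassLayout;

public class LayoutDemo {
    public static void main(String[] args) {
        // Prints the object header, field offsets, and padding of java.lang.Float
        System.out.println(ClassLayout.parseClass(Float.class).toPrintable());
    }
}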
We have seen that boxed arrays take up a lot more memory than their primitive counterparts and store references to objects, which introduces an extra level of indirection. While programming in Java, we often find ourselves using simple objects that represent points, vectors, matrices, and the like. Let's say we would like to store a thousand 2D points:
public class Point { public float x, y;}
Point[] points = new Point[1000];
As with the previous example, Point is an object and the array 'points' is an array of references. Each point has two floats and thus stores 8 bytes of data. We add the 12-byte object header, which brings us to 20 bytes. We are not done yet: 20 bytes is not aligned to an 8-byte boundary, so we add 4 bytes of padding. Each point therefore takes 24 bytes. In total, the 'points' array takes 28016 bytes, while the actual data accounts for only 8000 bytes. The overhead is roughly 3.5x.
If performance is our top priority, we can store the one thousand points in a float array such as:
float[] pointsXY = new float[2000];
However, by doing so we reduce the code readability and maintainability, not to mention the inability to pass individual points to a function without referencing the array that stores them. What if we are presented with the following point definition?
public class PointId { public int id; public float x, y;}
To store one thousand points within primitive arrays would require creating two arrays, one for the integers and another for the floats, as sketched below.
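A minimal sketch of that two-array workaround, following the interleaved pointsXY layout above:

int[] ids = new int[1000];     // one int per point
float[] xy = new float[2000];  // x and y interleaved, as in pointsXY

// Writing "point 7" now means updating both arrays by hand
ids[7] = 42;
xy[2 * 7] = 10f;      // x coordinate
xy[2 * 7 + 1] = 20f;  // y coordinate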
Can we store data in Java as compactly as with primitive types yet with class-like syntax?
Project Valhalla is an OpenJDK project which aims to bring value types to Java. While Project Valhalla would solve the above-mentioned issue for small objects, it has not been released yet, and its release date is still unknown.
SEE ALSO: Challenges and timelines for Project Valhalla
With the use of an open source library, we can define PointId as:
@Struct
public class PointId { public int id; public float x, y;}
By adding the @Struct annotation, PointId is no longer an object but a struct. We can use structs in ways similar to Java objects.
PointId[] points = new PointId[1000];
points[7].x = 10;
PointId p7 = points[7];
An array of a thousand points with IDs takes ~12040 bytes, since PointId is no longer an object but a struct. There are some differences between structs and objects; for example, structs do not use constructors. If PointId were an object, the above code would throw a NullPointerException.
To use struct types in Java, you will require this library.
Are your DevOps initiatives burning or falling flat? For developers that need a little help in the kitchen, here are JAX London speaker Liat Palace’s four easy steps to whip up a fruitful DevOps implementation.
One of the major challenges facing every significant change or transformation is how to start. This is especially true in the case of DevOps implementation.
There are many definitions of DevOps. From my experience, if you ask ten different people for a definition, you will get ten different answers. Moreover, every implementation differs considerably from all others, and so do the focus and needs of the relevant organization.
During my journey at Amdocs, I learned a number of things, including a recipe for facilitating a successful DevOps implementation. I believe this recipe can help your organization as well.
My recommendation is to use a combined top-down and bottom-up approach. The high-level organization strategy should be defined top down, with a vision and KPIs for success. The bottom-up part is making sure that each project articulates its own DevOps journey according to its needs.
In general, the following guidelines are suitable for a DevOps implementation at the project level once the organization strategy has already been set. The process is divided into four major steps: two of them are performed only once, while the other two are iterative.
Define and articulate what DevOps represents for the project.
Start by increasing your knowledge.
Where are you now? How far away are you from where you desire to be?
Tip: Make sure to give attention to the assessment aspects as well as to the technical improvements. Many organizations tend to focus only on the latter, overlooking the fact that, according to Gartner, only 8% of a DevOps transformation consists of technology.
Who will lead the change?
Strong sponsorship is a must. There is no way to lead any transformation without the backing and support of management.
Implementation strategy is an important phase that helps create the vision and mission. This is the responsibility of the project’s leadership.
What are the problems we are trying to solve?
What areas are we going to touch?
Obviously, when talking about DevOps we know that we are dealing with processes, technology and culture. As we look deeper, however, and take a view of the entire system, we inevitably learn that there are many additional factors that influence the delivery.
For example, HR processes: how can we combine operations and development efforts when different HR processes support the implementation? Funding models are another example: can we work quickly if our contracts do not support these models, or if our sales reps do not know how to sell them?
How should we facilitate the change mechanism?
Quick wins
As the journey will be long, we need to be constantly on the lookout for low-hanging fruit that will result in small successes and motivate the teams. Value stream mapping can definitely help decide where to make the right investments.
In order for the journey to last and to ensure that our capabilities will continually improve, we must create a strong change driving mechanism.
Backlog
I recommend that teams maintain all DevOps improvement items in a backlog, estimated and prioritized by the value they deliver and the waste they eliminate. I also suggest using a single tool to achieve visibility of the work and focus.
Cadence
My recommendation is to be aligned with the rhythm of the project or organization, since some of the backlog items might become scrum teams’ backlog items. (This includes things like automation, tools, architecture, and so on.)
Ceremonies
By following all of the above steps, you will have a good facilitation mechanism that will help you implement improvements. You will know that you are heading in the right direction once you can identify impediments on the ground and solve them as part of the DevOps improvement mechanism.
Our goal eventually is to create a continuous improvement mechanism.
Liat Palace will be delivering a talk at JAX London 2018 on Tuesday, October 9 as part of the DevOps & Continuous Delivery track that explores the idea of a DevOps Cookbook in more depth and detail.
Are you ready for the next stage of development? JAX London speaker Tracy Miranda explains how GitOps is ready to save the day in the event of complete system failure.
Cloud development led us first to DevOps. Some are already declaring DevOps the new legacy, arguing that a new paradigm is needed to meet the needs of cloud-native development. Enter GitOps. Initially proposed by Alexis Richardson, GitOps offers to be the new community of practice where we push code, not containers, and perform operations by pull request.
The most valuable concept that falls under the GitOps umbrella is Mean Time to Recovery (MTTR). Simply put: if your system failed completely, how long would it take you to get it up and running from scratch? Using GitOps, Alexis' team can recover in five minutes. Five minutes. That underlines the ultimate promise of GitOps. Let's face it: you don't have to look hard to find systems brought down by system engineering failures. For example, there's the recent Google BigQuery outage.
The key to optimizing your MTTR is treating your ops configuration as code. No more hand-coded magic setup that was once done by somebody who may or may not be at hand when the system fails.
JAX London “DevOps & Continuous Delivery” track
Interested in learning more about GitOps? Tracy Miranda will be at JAX London this October. Her talk, “Gitops, Jenkins & Jenkins X” is a part of the DevOps and Continuous Delivery track. This track is all about today’s technology challenges, as well as the dos and don’ts and ups and downs of modern software architecture. Join us at JAX London!
For Jenkins users, this is achievable using the configuration-as-code (CasC) plugin. This is a relatively new plugin that is currently under very active development. It lets you configure your Jenkins setup declaratively using YAML, providing the configuration for the initial Jenkins setup to give you a fully working master. It covers pipeline jobs as well as Jenkins plugins, since CasC supports most plugins out of the box.
GitOps and mean-time-to-recovery become even more crucial when it comes to running scalable, high availability systems. Typically, these systems will be running in a cluster or perhaps in the cloud. Many are standardizing on using Kubernetes. Enter Jenkins X. Jenkins X rethinks CI/CD in the cloud with a focus on making development teams productive through automation, tooling and best practices. Naturally, these best practices focus on GitOps, using git as a source of truth and built-in automation to ensure your mean-time-to-recovery is a priority from the start.
Join me at JAX London to see GitOps in action as we aim to get your mean-time-to-recovery optimized with Jenkins and Jenkins X.
Thanks to Article 17 of the GDPR, organizations using event sourcing need to double check and make sure they are following the rules. JAX London speaker Michiel Rook explains a few simple fixes to make sure performance isn’t impacted by compliance.
Recently, the EU General Data Protection Regulation (GDPR) came into effect. You’ve probably heard all about it or at least seen the absurd amount of ‘update privacy policy’ emails in your inbox. In any case, the GDPR attempts to regulate data protection for EU citizens. It is applicable to any organization that deals with EU citizens.
The GDPR has many implications for any software or organization that processes data. However, if you are considering implementing event sourcing in your application or have already done so, there are a few provisions in the regulation that have specific implications for event sourced applications.
One of the requirements of the GDPR is that an organization should be able to prove it has consent to process someone's personal data. The consent must be very specific, and it must be possible to withdraw it at any time.
For example, if you use the same personal data to send a newsletter, perform data analysis, and do re-targeting, you must have consent for each of those actions individually and support individual withdrawal of consent.
Demonstrating that consent was given is easy when that consent was recorded as an event.
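As a hypothetical illustration (the event names are mine, not from any particular framework), consent can be modeled as one event per purpose, which makes both proof and granular withdrawal straightforward:

import java.time.Instant;

// One event per purpose, e.g. "newsletter", "analytics", "re-targeting",
// so each purpose can be proven and withdrawn individually.
final class ConsentGiven {
    final String subjectId;
    final String purpose;
    final Instant timestamp;

    ConsentGiven(String subjectId, String purpose, Instant timestamp) {
        this.subjectId = subjectId;
        this.purpose = purpose;
        this.timestamp = timestamp;
    }
}

final class ConsentWithdrawn {
    final String subjectId;
    final String purpose;
    final Instant timestamp;

    ConsentWithdrawn(String subjectId, String purpose, Instant timestamp) {
        this.subjectId = subjectId;
        this.purpose = purpose;
        this.timestamp = timestamp;
    }
}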
Without a doubt, the most interesting article in the regulation concerning event sourced applications is Article 17, the “Right to Erasure”.
“… the data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay and the controller shall have the obligation to erase personal data without undue delay…”
Whenever an Article 17 request is received, we first have to identify all the events that contain personally identifiable information about the requestor. Those events must either be sufficiently anonymized or removed altogether.
Individual events are generally considered immutable. After all, events are a reflection of history, records of something that happened. Multiple events form event streams, persisted in event stores that are append-only. In fact, some implementations are even backed by immutable storage such as Kafka or a WORM drive.
Append-only event stores with immutable events have their own special advantages. Events can be cached ad infinitum and form the basis of a stable audit log. Any mistakes, errors or missing information in previously persisted events are typically dealt with by applying corrective events similar to an accountant’s ledger or using upcasters.
However, when you need to erase or anonymize personal information, those strategies are no longer an option, as they’ll both leave the original data intact and you non-compliant!
One option is to create a copy of the original event stream, filtering out the affected events, or including anonymized versions of those events. When that process is completed, the original stream should of course be discarded.
Another way of dealing with this is to use a mix of event sourcing and regular database tables. The idea is that personal information is no longer stored inside the events themselves, but in another database or storage solution.
Whenever an event is read by the system, the associated personal information is then retrieved from the secondary database and merged with the event.
Dealing with an Article 17 request is then reduced to finding the right entry in the secondary database and removing it. Any subsequent reads of the event will leave that event essentially anonymized.
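A minimal sketch of such a secondary store, with hypothetical names throughout:

import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Events carry only the subjectId; the actual personal data lives here.
class PersonalDataStore {
    private final Map<String, String> namesBySubject = new ConcurrentHashMap<>();

    void save(String subjectId, String name) {
        namesBySubject.put(subjectId, name);
    }

    // Article 17 request: removing the entry leaves every event that
    // references this subject effectively anonymized on subsequent reads.
    void erase(String subjectId) {
        namesBySubject.remove(subjectId);
    }

    // Called when an event is read, to merge the personal data back in.
    Optional<String> find(String subjectId) {
        return Optional.ofNullable(namesBySubject.get(subjectId));
    }
}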
The last technique I want to discuss keeps the personal information inside events, but encrypts that data using a unique key that is either associated with the event or an aggregate. The encryption key is stored in and retrieved from a centralized key management system. Events are decrypted automatically before they are handled by domain code.
Whenever an Article 17 request is received, the appropriate key is looked up and promptly forgotten, i.e. removed. This renders the personal information unreadable and effectively removed.
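A minimal sketch of this crypto-shredding idea, with one AES key per data subject. The class names and in-memory key store are hypothetical; production code would use a real key management system and an authenticated cipher mode such as AES/GCM:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

class SubjectKeyVault {
    private final Map<String, SecretKey> keys = new ConcurrentHashMap<>();

    // Creates a key for the subject on first use.
    synchronized SecretKey keyFor(String subjectId) throws Exception {
        SecretKey key = keys.get(subjectId);
        if (key == null) {
            KeyGenerator gen = KeyGenerator.getInstance("AES");
            gen.init(128);
            key = gen.generateKey();
            keys.put(subjectId, key);
        }
        return key;
    }

    // Article 17 request: forgetting the key makes every encrypted
    // payload for this subject permanently unreadable.
    void forget(String subjectId) {
        keys.remove(subjectId);
    }

    byte[] encrypt(byte[] payload, SecretKey key) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        return cipher.doFinal(payload);
    }

    byte[] decrypt(byte[] encrypted, SecretKey key) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.DECRYPT_MODE, key);
        return cipher.doFinal(encrypted);
    }
}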
This was just a quick overview of some of the ways organizations can continue to use event sourcing while remaining compliant with the new GDPR regulations. If you're interested in learning more about this topic, come join me for my talk "Forget me, please? Event sourcing and the GDPR" at JAX London this fall!
Michiel Rook will be delivering a talk at JAX London 2018 on Wednesday, October 10 as part of the Software Architecture & Design track. His talk goes more into detail about the effects of the GDPR and how it interacts with enterprise development.
Cryptocurrency may have gotten all the buzz last year, but the underlying technology is what we should really be talking about. In this article, Blockchain Technology Conference speaker Vinita Rathi explores why blockchain and distributed ledger technology platforms deserve our attention.
Cryptocurrency was a massive buzzword across all sectors of technology and finance last year, but since then the cryptocurrency hype has seen a downfall. While the future of cryptocurrencies is still largely unknown and their role in the real economy is yet to be understood, what I would like to discuss here is the underlying technology that cryptocurrencies are based on: blockchain and the various distributed ledger technology platforms such as Hyperledger, Ethereum, NEM, and more.
Recently, I was approached by a firm that is in the process of digitizing medical records of citizens of an African nation. They wanted to not only control access to the medical records via biometric recognition and authentication, but also put technology in place to ensure that the medical records were immutable once they were in the system. The key points for consideration were:
While we looked at several private blockchain solutions like Waves, NEM, and Corda, we noticed that Hyperledger Fabric stood out among them time and time again, both for this use case and for others we had been working on.
The key decisions that ruled in favor of Hyperledger Fabric were:
Most of the promising solutions that I come across lack the maturity needed for such a big implementation. Only a handful of systems are being used in large-scale enterprise production environments.
One challenge worth pointing out is the concurrency issue we encountered with Hyperledger, especially in our use case. We needed to allow multiple read and write operations on the same object concurrently.
In this situation, the use case is:
Even though the chances of the same object being written to simultaneously were rare, we still had to account for a situation where a write request for the object was pending while read access was requested. This could potentially invalidate the state of the read.
This is how the first version of the architecture looked:
Figure 1: Our architecture.
The second challenge we faced was deployment. Hyperledger relies heavily on Docker. Luckily, our DevOps team at Systango is accustomed to using it in pretty much every project we undertake.
While we have deployed the system in production, its usage is still at a very early stage. It’s hard to comment on the system’s scalability as of yet. However, in the sandbox environment, we tested it with 1,500 concurrent transactions and it performed fairly well.
For an in-depth review of the architecture, deployment challenges and key learnings of other projects, please join me at my session at the Blockchain Technology Conference. I’ll be going over all this and more there.
Vinita Rathi will be delivering a talk at Blockchain Technology Conference on Tuesday, November 20 that goes over what blockchain is, functional blockchain solution designs, and the five pillars of an enterprise blockchain solution design.
“Architecting an enterprise blockchain solution: Key considerations”
Companies like Facebook, GitHub, and Shopify are using GraphQL as an alternative to RESTful Web Services. We talked with Christian Schwendtner at MobileTech Conference about both approaches and their differences.
JAXenter: Hello Christian and thank you for taking the time to speak with us. GraphQL is supposed to be something like an alternative to the more traditional approach with RESTful Web Services. Can you briefly describe what exactly GraphQL is?
Christian Schwendtner: Hello Dominik, of course I can! GraphQL is primarily a flexible query language. That's also the first thing you'll see on the official GraphQL website, which leads with "A query language for your API". And this describes GraphQL's core quite well, because its focus is on the client side and its data needs, i.e. making it possible for clients to query the required data in a simple and flexible way.
The flexibility the client gains in the process is the most exciting point here. The client can now easily and precisely define which data (including relations) it needs from the server. GraphQL thus enables a very client-oriented, or rather use-case-oriented, view. The client can load exactly the data it needs for a use case or screen in a single request (no under- or overfetching).
JAXenter: How is GraphQL different from RESTful Web Services?
It’s not the right approach to blindly trust GraphQL to solve every problem.
Christian Schwendtner: I get this question quite often when people come into contact with GraphQL for the first time. Before answering it, I would like to point out that I think it's important to take a closer look at the term RESTful Web Services. The term is well-defined but often used rather loosely in practice. We like to call any service REST that uses resources, HTTP verbs, and HTTP status codes, or that provides data in JSON. And that's actually not the case. One should only speak of REST or RESTful Web Services when all of the defining criteria are met, as laid down by Roy Fielding, the inventor of REST.
And one important criterion is HATEOAS, or "Hypermedia As the Engine Of Application State". It is this HATEOAS that is not implemented at all, or not sufficiently, in many supposedly RESTful services. There's a good way to get an understanding of "how RESTful" a service is: we can use the Richardson Maturity Model for this distinction.
The model defines different levels depending upon which aspects of REST are implemented. A service that only matches some criteria of REST (e.g. "only" resources, HTTP verbs, and status codes, but not HATEOAS) is often called REST-ish, REST-like, or REST-wannabe. I am aware of how this sounds, like splitting hairs, but I think it's important to be aware of what you are comparing GraphQL to, especially in comparisons with RESTful Web Services.
Talking from experience, GraphQL is a good alternative to the second level of the Richardson Maturity Model (the use of resources, HTTP verbs, status codes), but not a good alternative to “real” REST services (keyword HATEOAS).
A major difference is the mandatory use of a schema. A GraphQL schema defines all types that can be queried and all operations that can be performed (mutations). Thanks to this, it is possible to check a query's correctness at build time of the application.
GraphQL also offers some interesting concepts, fragments for example: on the client side, you can use GraphQL fragments to split a large query into smaller parts. This allows you to define data requirements not in one place per use case (or screen), but per UI component, which has the advantage that data needs are defined where they are best known. A (large) query can then be assembled from the fragments, containing the data requirements of all components on the respective screen, and the required data can then be loaded with a single request.
Another interesting thing about GraphQL is subscriptions. With a subscription, a client can register for an event on the server to be informed about changes (e.g. using WebSockets). This makes it very easy to implement push functionality in applications. You could say that GraphQL is an opinionated approach; certain solutions are provided for certain problems, so you get a complete solution with all its advantages and disadvantages.
An indirect weakness of GraphQL is that it’s sometimes advertised as “REST 2.0” or “the better REST”.
JAXenter: What are the weaknesses of GraphQL?
Christian Schwendtner: Many are impressed by the simplicity of defining client-side data needs in GraphQL. But you shouldn't forget the server side. Due to the freedom you gain on the client, you often have to invest more thought in the server to avoid running into performance issues.
Since the client can flexibly define its data requirements — in line with the motto “wish for something” — special attention must be paid to performance on the server side. And this should not happen too late in the project.
An indirect weakness of GraphQL is that it’s sometimes advertised as “REST 2.0” or “the better REST”. And I don’t think that’s true. Different things are often compared here. In my opinion, GraphQL is not an alternative for a “real” REST service, but for many REST-ish services, as you often see them. They are more concerned with making data available to the client in a simple and flexible manner.
JAXenter: Let’s say I developed an app which still relies on RESTful Web Services. How hard would it be to upgrade to GraphQL? Is it really doable?
Christian Schwendtner: I think RESTful Web Services and GraphQL can complement each other well. GraphQL can be used as a gateway to give the client a consistent view of the data. The GraphQL gateway wouldn't load the data itself (say, from a database), but would call the existing REST endpoints and act as an aggregation point. This way, you can give the client GraphQL's flexibility while keeping the actual services as RESTful services. This approach also makes a lot of sense in a microservice architecture.
With the beginning of a new project, you can of course consider working solely with GraphQL. In this case, I would make the decision dependent upon the specific problem. It’s not the right approach to blindly trust GraphQL to solve every problem.
JAXenter: Do you have any advice for developers who are interested in GraphQL?
Christian Schwendtner: GraphQL is an interesting approach, but not a panacea for all our problems. And we tend to look for this kind of cure-all quite often, forgetting to deal with the specific problem at hand.
And suddenly we end up at the point where "we had a solution, but it didn't fit the problem". I think that using GraphQL can be very useful, but it isn't a magic fix for our problems. In your daily work as a developer or architect, you will have to decide for a technology quite often. And sometimes it's important to make a well-founded decision against a technology, even if that happens less frequently. Always keep this in mind and don't forget it.
JAXenter: Thank you!
The Agile manifesto says that “the most efficient and effective method of conveying information to and within a development team is face-to-face conversation”. Here, JAX London speaker Erica Tanti explains how you can face your public speaking fears and communicate better for development success.
If you want to learn a new programming language, tool or technology, it’s useless to sit around all day reading articles. You have to roll up your sleeves and start coding. Similarly, I could direct you to several excellent books, articles and other resources about public speaking, or invite you to go watch past JAX London talks and analyze how the speakers behaved on stage. The fact of the matter is if you want to learn how to be a better speaker, you’re going to have to start speaking.
This is the point where I lose most people. “I can’t!”, they say with fear in their eyes. People think speaking is a magical innate ability – you either have it, or you don’t. Believe me when I say that this is something you can learn and get better at, with practice. Today, I’d like to invite you to start practicing. If you want to start speaking but don’t know how, here are some ideas from easy to advanced.
I’ve found that, more often than not, people are better at speaking than they think – what prevents them from speaking is their fear. So face your fear head on, pick one of the above (or more!) and start practicing.
Erica Tanti will be delivering a talk at JAX London 2018 on Wednesday, October 10 as part of the Agile & Communication track. Her talk explores in depth how to become more comfortable with public speaking and why improving your public speaking skills is important.
Women are underrepresented in the tech sector: myth or reality? Last year, we launched a diversity series aimed at bringing the most inspirational and powerful women in the tech scene to your attention. Today, we'd like you to meet Erica Tanti, Software Engineer and a speaker at JAX London 2018.
A research study by The National Center for Women & Information Technology showed that “gender diversity has specific benefits in technology settings,” which could explain why tech companies have started to invest in initiatives that aim to boost the number of female applicants, recruit them in a more effective way, retain them for longer, and give them the opportunity to advance. But is it enough?
Women in Tech — The Survey
We would like to get to the bottom of why gender diversity remains a challenge for the tech scene. Therefore, we invite you all to fill out our diversity survey. Share your experiences with us!
Your input will help us identify the diversity-related issues that prevent us from achieving gender equality in technology workplaces.
Without further ado, we would like to introduce Erica Tanti, Software Engineer and a speaker at JAX London 2018.
Erica Tanti will deliver one talk at JAX London 2018, in which she will teach attendees how to improve their speaking skills, how to build up their confidence, how to prepare presentations, and more.
Erica Tanti is a software engineer at a Fintech company in Malta. Additionally, she is a committee member of the Malta Toastmasters Club which has opened doors to a number of other speaking engagements, the highlight last year being representing Malta at the JCI Public Speaking European Committee in Basel, Switzerland.
When I was a kid, I loved playing games on my father's Commodore 64. However, for most of my childhood I looked at computers as things with applications in them that I could use. At 14, during the summer, I decided to pick up my computer studies textbook and learn how to code ahead of learning programming at school. I was instantly hooked and told everyone I knew that I wanted to be a programmer when I grew up. They told me I might change my mind as I grew older, but I haven't looked back since.
From that moment on I studied for a B.Sc., then an M.Sc. in computer science at the University of Malta. Whilst doing my B.Sc. I got an internship at Ixaris, a Fintech company with offices in Malta. I have been there ever since.
My family was always super supportive, and I also had a super supportive Computer Studies teacher at school, Ms. Pam, who put up with my million and one questions and was always there to help. As for role models, I distinctly remember the first time I read an article on Ada Lovelace. I remember being so surprised that the “first computer programmer” was a woman. Before that, I had only ever seen and heard of men pioneering the computing industry. Now that I’ve educated myself a lot more on the history of software development, I realize the irony of that statement, but back then I had no idea.
Regarding obstacles: earlier this year, I attended and spoke at a career day for young girls which focused on STEM careers. The career day had a panel (which I wasn’t on) where a number of women spoke about their experiences as women in the STEM industry. I was dismayed by the number of stories of difficult situations these women had encountered because of their gender. I am not one of those stories. I know that I am in a position of great privilege to have had such an easy time in the industry. However, it is important for me to tell my story as it happened. I think that if I had been an attendee at that career day, I would have left feeling discouraged rather than filled with enthusiasm. So I hope that my story of “actually, not that bad” fills people with hope that a STEM career doesn’t necessarily have to be one filled with obstacles.
I’m currently a Software Engineer at Ixaris, a business-to-business payments solutions company, at their offices in Malta. At Ixaris, I work on a variety of technologies and systems, from the front end to deep within our transactions engine.
On a typical day, I build new features with my team, either alone or pair programming, or work on initiatives like improving performance or code quality. I also organize weekly tech talks, giving everyone an opportunity to share new and exciting knowledge.
This is a really difficult question to answer as a woman who is in tech, has always liked tech and wanted to be part of this world! I think the problem starts with young girls (and their parents, teachers, etc.) seeing past the stereotypes and recognizing tech as something which could be a good fit for them. I can honestly count on one hand the times people have been negative about my being a software engineer. Out in the industry, I’ve found a supportive community which believes in my abilities (sometimes more than even I do) and pushes me to grow and get better year on year.
I think the biggest obstacle we currently face is the idea that equality is already here. The fact that we’ve come so far in the past few decades doesn’t mean that equality has arrived. Yes, it might be true that, in Europe at least, girls have equal access to education and women are treated well in the workforce. However, women are still underrepresented in STEM and leadership positions, and we need to actively help change that. This isn’t something which will just magically happen overnight without anyone lifting a finger. I’ve had to explain this more times than I can count.
I believe that STEM in general, and software development in particular, is the future. Because of this, I believe it is vitally important that women and other marginalized groups are equally represented in STEM, in the same way they should be equally represented in politics. Knowing how to use computers and write code already confers a certain power, and it will confer even more in the future. That might seem excessive; if so, I suggest you read the following article, as I have very similar thoughts on the subject.
Change is slow. I don’t know when we’ll see results, but I know we’re not there yet, and the next generation won’t be there either, despite the increased awareness of the importance of STEM. I say this from my experience participating in career days for the 16-18 age group, where we are still far off from an even split.
If you’re just starting out: there are many career options in tech, so make sure to see what your options are and what sounds interesting to you! If you’re interested in being a software engineer specifically, then start coding – practice is the key.
If you’re in the industry already: my motto is “Do no harm, but take no shit.” There’s nothing special needed to be a woman in tech, and don’t let anyone tell you differently. At times I suffer from impostor syndrome; I counteract this by pushing myself to try things out, and I believe this technique has opened the door to many opportunities.
Don’t miss our other Women in Tech profiles.
We’re back with a new programming pub quiz! This week, we’re testing your knowledge about Deeplearning4j. Do you know everything there is to know about this machine learning library for the JVM?
It’s time for another pub quiz. Today, we’re testing your knowledge of Deeplearning4j trivia! We got a little help from Skymind for this one, so you know it’s good.
1. What was the original name of DataVec, DL4J’s ETL library?
a) Canova
b) DataFlow
c) Arbiter
d) Aristophanes
2. Which important open-source library created by a Skymind engineer is not part of Eclipse Deeplearning4j?
a) Apache Hadoop
b) TensorFlow
c) JavaCPP
d) Arrow
3. Which co-creator of Deeplearning4j was in first grade when the Java language was created in 1995?
a) Josh Patterson
b) Adam Gibson
c) Chris Nicholson
d) Alex D. Black
e) That other dude…
4. What is the DL4J community’s mascot?
a) Tony the Tiger
b) A goshawk
c) An oloid
d) A mastodon
5. What does the DL4J user support channel on Gitter most resemble?
a) A firehose
b) The Mississippi river, endless and meandering
c) A corset: great support, but sometimes painful
d) All of the above
6. Which version of Deeplearning4j first offered auto-differentiation?
a) 0.9.1
b) 0.4-rc2.2
c) 1.0.0
d) What’s auto-differentiation?
7. Which Python deep learning library counts Skymind as its second-largest contributor?
a) TensorFlow
b) PyTorch
c) Keras
d) Meh, Python…
8. What was the first neural network implemented in Deeplearning4j?
a) Convolutional network
b) Restricted Boltzmann machine
c) Neural Turing machine
d) Multilayer perceptron
e) Skynet.
9. Which IDE does the DL4J community recommend?
a) Eclipse
b) Netbeans
c) IntelliJ, but not the latest community version
10. Which build tool does the community recommend?
a) Gradle
b) Maven
c) Ant
d) Ivy
e) Who needs build tools? Let’s add JAR files like it’s 1999.
1. a) Canova
2. c) JavaCPP (created by Samuel Audet)
3. b) Adam Gibson
4. c) Their mascot is a mathematical object called an oloid, which is formed with two conjoined circles at perpendicular angles to one another.
5. d) All of the above.
6. c) 1.0.0
7. c) Keras
8. b) Restricted Boltzmann machine
9. c) IntelliJ
10. b) Maven
How well did you do? Do you know your Deeplearning4j trivia?
0-3 correct: You’re just a machine learning beginner.
4-5 correct: You’re pretty solid in your Deeplearning4j trivia, but you still might need to pay a little more attention to the details.
6-8 correct: Nice! You really know your stuff!
9-10 correct: You are a machine learning master.
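If the quiz left you curious to try Deeplearning4j yourself, here is a minimal sketch of what a network definition looks like, in keeping with the community’s Maven and IntelliJ recommendations above. Treat it as illustrative rather than definitive: it assumes a 1.0.x-era API with the deeplearning4j-core artifact and an ND4J backend such as nd4j-native-platform on the classpath, and the MNIST-style layer sizes (784 in, 10 out) are arbitrary choices for the example.

import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class QuizFollowUp {
    public static void main(String[] args) {
        // A small multilayer perceptron: 784 inputs (e.g. flattened MNIST pixels),
        // one hidden layer of 100 ReLU units, and a 10-way softmax output.
        // The sizes are illustrative assumptions, not a recommendation.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(123) // fixed seed so runs are reproducible
                .list()
                .layer(0, new DenseLayer.Builder()
                        .nIn(784).nOut(100)
                        .activation(Activation.RELU)
                        .build())
                .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(100).nOut(10)
                        .activation(Activation.SOFTMAX)
                        .build())
                .build();

        MultiLayerNetwork model = new MultiLayerNetwork(conf);
        model.init(); // allocates the network's parameters via ND4J
        System.out.println("Trainable parameters: " + model.numParams());
    }
}

The fluent builder shown here is the heart of DL4J’s configuration style: each layer declares its input and output sizes, and init() allocates the parameters through ND4J, so printing numParams() is a quick sanity check that the network was wired up as intended.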
Programming Pub Quiz: Have you tried our other pub quizzes? Test your knowledge of other topics!