Download the presentations
Monday, 17 June 2019
An Introduction to Deep Learning with Keras - train your own Image Classifier with Deep Neural Networks [SOLD OUT]
In this workshop, you will learn how to create a neural network that can classify images into their respective classes (Image Classification). For this, we'll be using the Fashion-MNIST dataset. While starting with a very basic, shallow network and the underlying ideas, we gradually add more complexity and introduce convolutional layers to improve the performance of our model. At the end, you will have a solid understanding of the underpinnings of deep learning and have all the tools needed to start applying these to your own projects.
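For orientation, a shallow classifier of the kind the workshop starts from might look like this in Keras (a minimal sketch, not the workshop's actual code; the layer sizes are illustrative):

```python
# A minimal Keras sketch of a shallow Fashion-MNIST classifier
# (illustrative only; layer sizes are an assumption).
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(28, 28)),                   # Fashion-MNIST images are 28x28 grayscale
    keras.layers.Flatten(),                        # unroll each image into a 784-vector
    keras.layers.Dense(128, activation="relu"),    # one hidden layer -> a "shallow" network
    keras.layers.Dense(10, activation="softmax"),  # 10 clothing classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Training would then be:
# model.fit(x_train, y_train, epochs=5, validation_split=0.1)
```

The convolutional step the workshop takes afterwards would replace the Flatten/Dense stack with Conv2D and MaxPooling2D layers.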
Workshop: Deploying MQTT with Containers and Orchestration Software
This hands-on workshop demonstrates how to realize a deployment using MQTT, the de facto standard protocol for IoT, in combination with state-of-the-art DevOps technologies. Together with the participants, Anja and Florian will build a fully automated deployment pipeline using open source MQTT software, Docker, and OpenShift. They will also share the experience they have collected over many years of working on countless production deployments that use MQTT as the transmission protocol.
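As a rough idea of the kind of client code such a pipeline deploys, here is a minimal publisher sketch; the broker host, topic layout, and payload fields are invented, and the paho-mqtt client library is assumed:

```python
# Sketch of an MQTT telemetry publisher (illustrative only; broker host,
# topic layout, and payload fields are assumptions, not the workshop's setup).
import json

def make_payload(device_id: str, temperature: float) -> bytes:
    """Encode one telemetry reading as a JSON payload."""
    return json.dumps({"device": device_id, "temp": temperature}).encode("utf-8")

def publish_reading(broker_host: str, device_id: str, temperature: float) -> None:
    """Connect to a broker and publish one reading (requires a reachable broker)."""
    import paho.mqtt.client as mqtt  # imported here so the sketch loads without the library
    client = mqtt.Client()
    client.connect(broker_host, 1883)
    client.publish(f"devices/{device_id}/telemetry",
                   make_payload(device_id, temperature), qos=1)
    client.disconnect()

# e.g. publish_reading("broker.example.com", "sensor-42", 21.5)
```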
Machine Learning 101++ using Python
Machine learning is often hyped, but how does it work? We will show you, hands-on, how to inspect data, make predictions, build a simple recommender system, and more.
Deliver IoT in a Day
IoT, taking mechanical engineering as an example, does not have to be complicated. Many machines already deliver data via standardized interfaces today. But how does the data get from the machine into the cloud, and what are the advantages and disadvantages? Using practical examples with Azure IoT Central and Azure IoT Edge, the workshop shows how machines can easily be connected to the cloud and what possibilities already exist today. To simulate a machine, every workshop participant receives an MXChip IoT kit and can connect it to the Azure cloud. Once the data has arrived in the cloud, you will learn how machine data can be visualized, evaluated, and monitored, and how tools such as machine learning can derive further information from the data. In addition, the workshop shows how you can conveniently control machines from the cloud and how edge computing allows critical modules to be transferred from the cloud directly onto the machine and executed there. The goal of the workshop is to put an IoT-capable device, including cloud infrastructure, into every participant's hands within a single day, which will remain available after the workshop as a basis for further development. More information: https://iotcon.de/deliver-iot-in-a-day-workshop/
Tuesday, 18 June 2019
The Ethics of AI – dealing with difficult choices in a non-binary world
I started working with user experience (UX) long before the term was even known. Over the past 40 years, I’ve encountered many issues that have disturbed me – from creating purposely addictive programs, sites, and apps, to the current zeitgeist for various design trends at the expense of basic usability. I have seen research that is faked, ignored, or twisted by internal company politics and by the cognitive bias of the design team. And I have seen countless dark patterns that suppress accessibility and diversity by promoting false beliefs and false security. Whenever we say, “That’s not my problem,” or, “My company won’t let me do that,” we are handing over our ethical responsibility to someone else – for better or for worse. Do innocent decisions evolve so that they promote racism or gender discrimination through inadvertent cognitive bias or unwitting apathy? Far too often they do. We, as technologists, hold incredible power to shape the things to come. I would like to share my thoughts with you so you can use this power to truly build a better world for those who come after us!
Practical IoT Experience: GSM, Sigfox, LoRa, NB-IoT, and Many Sales Promises
LIV-T GmbH operates two B2B SaaS IoT products for which LPWAN technologies are essential: a tank level sensor (OilFox) with NB-IoT, Sigfox, LoRa, and WLAN connectivity, and an indoor climate controller (Raumgold) with GSM. Over the past two years, all relevant connectivity standards were tested, taken into series production, and shipped to customers. Along the way, many insights were gathered about the real-world availability and applicability of these technologies. The talk covers the past and the status quo and ventures an outlook on customer requirements, UX, production challenges, network coverage in Germany and neighboring countries, and pricing.
How do Chess Engines work? Looking at Stockfish and AlphaZero
Game playing is a classic discipline of AI and had a major breakthrough in the 90s when Deep Blue defeated Kasparov and arguably became the world's best chess player. First, we will look at which algorithms made that success possible and how they are still used within Stockfish, one of the leading chess engines. Here, we will cover Minimax and alpha-beta pruning. However, the emphasis of this talk will be on Monte Carlo Tree Search and its advanced use in AlphaZero, which relies on zero human heuristics and does not even use an opening library. You will learn how it trains using self-play on a convolutional ResNet architecture. At the end, we will briefly look at a great game between Stockfish and AlphaZero and discuss why the era of classic chess engines might be over.
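The core pruning idea is compact enough to sketch; the following toy implementation is illustrative only and vastly simpler than Stockfish's actual search:

```python
# Minimal alpha-beta pruning on an abstract game tree (illustrative sketch).
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """children(node) -> successor positions; evaluate(node) -> static score."""
    succ = children(node)
    if depth == 0 or not succ:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in succ:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:   # beta cutoff: the opponent will never allow this line
                break
    else:
        value = float("inf")
        for child in succ:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:   # alpha cutoff
                break
    return value

# Toy tree: leaves are integers carrying their scores directly.
tree = {"root": ["a", "b"], "a": [3, 5], "b": [2, 9]}
value = alphabeta("root", 2, float("-inf"), float("inf"), True,
                  lambda n: tree.get(n, []) if isinstance(n, str) else [],
                  lambda n: n if isinstance(n, int) else 0)
```

Here the maximizer's best guaranteed outcome is 3: subtree "a" yields min(3, 5) = 3, and subtree "b" is cut off after seeing the leaf 2.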
Reinforcement Learning: a gentle Introduction and industrial Application
Reinforcement learning learns complex processes autonomously. No big data sets with the "right" answers are needed; the algorithms learn by experimenting. Using reinforcement learning, robots learn to walk, beat the world champion in Go, or fly a helicopter. This talk shows "how" and "why" reinforcement learning algorithms work in an intuitive fashion, illustrating their inner workings by the way a child learns to play a new game. We show what it takes to rephrase a real-world problem as a reinforcement learning task and take a look at the challenges of bringing it into production for 7,000 clients in 42 countries around the world. Our industrial application is based on siphonic roof drainage systems, which ensure that large buildings such as stadiums, airports, or shopping malls do not collapse during heavy rainfall. Choosing the "right" diameters is difficult, requiring intuition and hydraulic expertise; as of today, no feasible deterministic algorithm is known. Using reinforcement learning, we were able to reduce the failure rate of our existing solution, based on classic supervised learning, by more than 70%.
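The "learning by experimenting" loop can be sketched with tabular Q-learning on a toy corridor world; this is an illustrative example, unrelated to the speakers' drainage system:

```python
# Tabular Q-learning on a five-state corridor: the agent starts at state 0
# and earns a reward only on reaching state 4 (toy sketch of the RL loop).
import random

N_STATES = 5            # states 0..4; state 4 is terminal with reward 1
ACTIONS = [-1, +1]      # step left / step right
q = [[0.0, 0.0] for _ in range(N_STATES)]

random.seed(0)
for _ in range(300):                    # episodes
    state = 0
    while state != N_STATES - 1:
        a = random.randrange(2)         # behave randomly; Q-learning is off-policy
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Q-learning update: move the estimate toward reward + discounted best future value
        q[state][a] += 0.5 * (reward + 0.9 * max(q[nxt]) - q[state][a])
        state = nxt

# After training, "step right" (action index 1) dominates in every non-terminal state.
policy = [max((0, 1), key=lambda i: q[s][i]) for s in range(N_STATES - 1)]
```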
Embedded ML for continued Product Leadership in global Machine and Plant Construction
Machine Learning (ML), made possible by data in combination with Moore's Law, is driving revolutionary changes in our society. It deals with predictive analytics and enables the optimization of operational processes by avoiding unwanted situations. Cloud providers and IoT platforms have incorporated their own data analysis solutions, as well as ML frameworks and libraries, into their offerings. However, many decision makers have a queasy feeling about sharing their production data with the outside world in the cloud. Using an edge solution on a standard PC directly in the plant or on the machine, data can remain within the firewall. If the Apple iPhone X, equipped with an ML chip, can train on its owner's images on the device and use the resulting model for future access control, the possibilities for mechanical engineering developers in terms of ML-based edge applications are unlimited! While the lead of the US and Chinese data companies in the consumer sector cannot be overlooked, European machine and plant manufacturers will continue to expand their product leadership by exporting machines with embedded ML around the world.
Enterprise MQTT Deployments: Implementing Projects with the Standard IoT Protocol
Using the example of an MQTT-based large-scale IoT project which is already in production, Florian Raschbichler shares the lessons learned that have accumulated over more than five years. From the initial architectural phase, via the proper use of broker and client implementations, to debugging best practices after the go-live, typical pitfalls and established solution strategies are highlighted. The talk shows proven ways and means of implementing projects with the highest scaling requirements using the de facto standard protocol for IoT.
Productionizing Machine Learning Models: Lessons learned in the Hadoop Ecosystem
The deployment of machine learning models can be challenging, especially in the context of distributed systems. Although Python is the dominant language among data scientists, it can create friction when integrating with JVM-based tools such as Spark or managing application dependencies on clusters of heterogeneous machines. Many data scientists developing on such systems struggle with the subtleties of these challenges. This presentation will share lessons learned working on large-scale Hadoop clusters and examine the most promising approaches to alleviate common issues. In particular, we will discuss our experience with leveraging containerization to tackle the dependency management challenge from a data scientist's point of view.
Supercharging your forecasting algorithms with alternative data
When most organizations do forecasting, they simply look at their own historical data and hope to get it right. I will show you that you are missing out if you are not putting your key measurements into a bigger context. The world is diverse, and especially when we talk about sales and financial KPIs, it is hard to capture all the factors that come into play. Including external and alternative data helps you make better predictions; you only need to follow the key steps and avoid common pitfalls. In this talk, we will help you get more out of your models, employing a toolset ranging from traditional time series forecasting to deep learning.
Machine Learning Models: solving Issues in Production and beyond
Given how much effort goes into creating a good machine learning model, one might assume that all of the hard work will be done once the model is finally ready for production. In actuality, taking the model to production is just the very first step. Once it is there, you also have to deal with issues that many would-be machine learning users don’t even consider when they start out. It doesn’t matter if you use sophisticated deep learning models, simple decision trees, or even the most basic linear regression model that Excel can produce, many of those issues will plague any system that builds models on data. In this talk, I will introduce you to issues like incorporating user feedback, concept drift, and unhealthy feedback loops. You'll learn how to recognize if these kinds of issues could affect you now or in the future, as well as what you can do to solve these problems.
Fast Machine Learning for Industry by OPC UA in, OPC UA out
Data needs to be cleaned before it can be fed into ML algorithms: outliers are deleted, faulty entries eliminated, timestamps aligned, inconsistencies removed, metadata added, and everything transformed into the proper format or structure. Even with software tools, data cleaning typically takes up to 80% of the overall project. If data is available in an OPC UA information model, data cleaning can be virtually skipped, cutting up to 80% out of the duration of an industrial analytics project. Sounds like magic; how is that possible? OPC Unified Architecture (OPC UA) is the de facto standard for data exchange between products from different industrial equipment manufacturers. It allows modeling objects of any degree of complexity. Companies actively using OPC UA in their machines or production lines implicitly make data available in a clean, structured way: not only can they feed their data directly into ML algorithms; they can also put the results back into the OPC UA information model for use by approved third parties. Hence: fast ML for industry by OPC UA in, OPC UA out!
IoT Product Service System Platform: Intelligent Automation in the Smart Home
Things only become truly smart when systems understand what people need and automation happens intelligently. The technology exists, but few combine the capabilities of the hardware with intelligent software in a way that makes our everyday lives genuinely "smart". With Livy, we show how intelligent hardware can optimize smart use cases in the smart home.
The more data, the better the AI, isn’t it?
LEVERTON develops and applies artificial intelligence to extract key data from corporate and legal documents. With large volumes of data coming from different sources, it is necessary to maintain a healthy level of data quality for deep learning algorithms to learn from. In this talk, we will showcase how LEVERTON tackles challenges related to data quality to achieve good AI performance.
How to implement Chatbots in an industrial Context
Chatbots are among the most popular applications of artificial intelligence and natural language processing. Many companies are developing first prototypes to improve their customer communication and support functions. But running a tutorial or using a predefined cloud service is fundamentally different from implementing a production chatbot for a real-world application that is scalable, continuously improvable, and integrated into the company's core IT architecture. In my presentation, I will talk about the experience ThoughtWorks has gained in several chatbot projects. First, we'll go over why defining the character of the bot is one of the most important design principles, followed by what the architecture of a bot looks like. We'll also talk about what can be done with machine learning algorithms and where the limits of this approach are today. We'll go over why testing a chatbot is difficult, how to run test-driven development and continuous delivery, and how to continuously improve the performance of the bot with continuous intelligence. I will illustrate this talk with a live demonstration of one of ThoughtWorks' chatbots.
Vertx.io and MQTT: Asynchronous in the Internet of Things
Can UX demystify AI?
A machine learning solution is only as good as the end user deems it to be. More often than not, we do not think through how results are communicated or measured. If we want users to trust and correctly interpret AI models, we need to make our models transparent and understandable. In this session, you will learn how to create explainable machine learning models using well-considered interactions and visualizations. We will look at example cases from the health care and marketing sectors, which illustrate how UX affects end users' perception of AI.
Hands-on IoT: From the Machine to the Cloud
IoT, taking mechanical engineering as an example, does not have to be complicated. Many machines already deliver data via standardized interfaces today. But how does the data get from the machine into the cloud, and what are the advantages and disadvantages? Using practical examples with Azure IoT Central and Azure IoT Edge, the talk shows how machines can easily be connected to the cloud and what possibilities already exist today. Once the data has arrived in the cloud, you will learn how machine data can be visualized, evaluated, and monitored, and how tools such as machine learning can derive further information from the data. In addition, the talk shows how you can conveniently control machines from the cloud and how edge computing allows critical modules to be transferred from the cloud directly onto the machine and executed there.
Bringing Data to the Edge
With growing data volumes and a need for real-time data, cloud-centric solutions, where data storage and processing are done centrally, no longer suffice in many IoT scenarios. Edge computing, meaning decentralized data storage and processing, brings back autonomous apps that are independent from, and coexist symbiotically with, the cloud. Edge computing brings real-time processing, reduced cloud costs, offline capability, and heightened security to IoT. It also enables clear data ownership in consumer scenarios, with data staying where it belongs: with the users and where it was produced. However, edge computing is still a challenge in many respects: from a developer perspective, because small devices come with many constraints; from a business perspective, because data ownership is part of many business models. What does edge computing mean for IoT? Could it be the solution for data security and privacy in consumer settings? What will the future of edge computing look like in a connected world? These are typical questions we face at ObjectBox that we would like to share with the audience.
Lifelong Learning for a Machine: IoT Architectures and Infrastructure of a Smart Factory
The smart factory is on its way to autonomous production. As this year's Hannover Messe showed, AI has been elevated to the key to Integrated Industry. With it, for the first time, the key application for the use of far more extensive networks takes center stage. This opens the door for the cloud in production as well. Yet production remains a thoroughly heterogeneous field that must be cultivated sustainably, and that requires deep knowledge of the paradigms of industrial automation. One thing is undisputed: new services are moving into production, and digital production is orchestrated yet, at its core, still automated. These two worlds are currently converging in great strides, and something long considered unthinkable is emerging: a synergetic coexistence. This talk shows what that coexistence can look like from the perspective of a smart factory, and what role the industrial IoT with decentralized edge devices plays in enabling the machine to learn for a lifetime.
Automated Hyperparameter Tuning
We'll philosophize about how machine learning is really nothing other than hyperparameter tuning, then take a look at how hyperparameter tuning is often done today, and end with a deep dive on Population-Based Training (PBT) and the Asynchronous Successive Halving Algorithm (ASHA), two simple but efficient algorithms for automated hyperparameter tuning.
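The successive-halving idea behind ASHA fits in a few lines. This synchronous sketch uses an invented noisy objective for illustration; ASHA itself runs the eliminations asynchronously across workers:

```python
# Synchronous successive halving (sketch): evaluate many configurations on a
# small budget, repeatedly keep the best half and double the survivors' budget.
import random

def successive_halving(configs, evaluate, min_budget=1, rounds=3):
    """configs: list of hyperparameter dicts; evaluate(config, budget) -> score (higher = better)."""
    survivors, budget = list(configs), min_budget
    for _ in range(rounds):
        scored = sorted(survivors, key=lambda c: evaluate(c, budget), reverse=True)
        survivors = scored[: max(1, len(scored) // 2)]   # keep the best half
        budget *= 2                                      # give survivors more resources
    return survivors[0]

# Invented objective: "training" a config for longer reveals its true quality.
random.seed(1)
def noisy_quality(config, budget):
    return -abs(config["lr"] - 0.1) + random.gauss(0, 0.01 / budget)

candidates = [{"lr": lr} for lr in (0.001, 0.01, 0.05, 0.1, 0.5, 1.0, 3.0, 10.0)]
best = successive_halving(candidates, noisy_quality, min_budget=1, rounds=3)
```

With eight candidates and three rounds, 8 are narrowed to 4, then 2, then 1, so most of the total budget is spent on the most promising configurations.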
Some Things I wish I had known about scaling Machine Learning Solutions
Building artificial intelligence solutions in the real world is a road full of challenges and very few answers. From training and regularization to deployment and monitoring, we are just starting to figure out the best practices for building AI solutions at scale. This session presents a series of patterns and anti-patterns of large-scale AI solutions. Covering the entire lifecycle of AI solutions from training to deployment, we will explore patterns and architectures that should be followed to build AI solutions at scale, as well as not-so-obvious anti-patterns that can result in major disasters. To keep things practical, we will explore our AI patterns and anti-patterns through the lens of real customers building real AI solutions.
Predicting New York City Taxi demand: spatio-temporal Time Series Forecasting
Time series forecasting has always been an important field in machine learning and statistics, as it helps us make decisions about the future. A special field is spatio-temporal forecasting, where predictions are made not only along the temporal dimension but also along a regional dimension. In this session, we will present a demonstration project that predicts taxi demand in Manhattan, NYC for the next hour. We'll show some of the basic principles of time series forecasting and compare different models suited for the spatio-temporal use case. To that end, we will take a closer look at the principles of models such as long short-term memory networks and temporal convolutional networks. We will show that these models decrease the prediction error by 40% compared to a simple baseline model, which predicts the same demand as in the last hour.
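The baseline mentioned above is a persistence forecast; a sketch with invented demand numbers shows how such a baseline is computed and scored:

```python
# Persistence baseline for time series forecasting: predict each hour's demand
# as the previous hour's observation, scored with mean absolute error.
# The demand numbers are invented for illustration.
def persistence_forecast(series):
    """The prediction for t+1 is simply the value observed at t."""
    return series[:-1]

def mae(y_true, y_pred):
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

demand = [120, 130, 125, 140, 160, 155, 150]     # hourly taxi pickups (invented)
baseline_error = mae(demand[1:], persistence_forecast(demand))
# A model claiming a 40% improvement would need an MAE <= 0.6 * baseline_error.
```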
How to track progress and collaborate in data science and machine learning projects?
In this talk, we focus on practical guidelines and tips for setting up and maintaining smooth collaboration in data science projects. You will learn how to organize your work around creative iterations and keep it reproducible and easy to share with anyone. You will see how to easily track code, metrics, hyperparameters, learning curves, data versions, and more. We will also address mutual communication needs between data scientists and business people.
Up-close and personal: Hyper-Personalization using Deep Learning
Booking.com is one of the world's largest online companies, empowering the millions of people visiting our website or app to choose the right accommodation and travel experience among our 29M+ listings in 200+ countries. How can personalization be achieved at such scale, and to what extent does machine learning play a role? In this talk, I'll cover the challenges and applications of handling personalization with machine learning at such scale. Join me as I walk through personalization concepts using deep learning and other machine learning applications, the role of a business or product manager in building machine learning solutions, and the unique Booking.com experimentation culture that enables measurable success.
Learning Rank Aggregation Methods
Rank aggregation is the process of combining multiple individually ranked lists into one robust ranking (a consensus ranking). Recently, the analysis of ranking data has attracted significant interest in the machine learning community. Preference aggregation methods are used in computational social choice, multi-agent systems, meta-search engines (e.g., ranking web pages), and real-world collective decisions such as election systems. The focus of this project is to apply versatile machine learning techniques to the mechanisms of rank aggregation methods in order to predict the winner. The primary technique of this experimental study is learning voting rules by treating them as supervised classification tasks. In its different configurations, a set of agents (voters) has preferences (votes) over a set of alternatives (candidates); taking the preferences of all agents (the so-called profile) as input, the mechanism determines the winner or an aggregated preference ranking of all alternatives. Clearly, the rank learning problem has a strong impact on identifying the election's winner, as determining the winner in Kemeny's voting scheme is NP-hard (for more than 4 candidates). The experimental study compares several machine learning methods for the Borda, Kemeny, and Dodgson voting rules with the goal of establishing the best trade-offs between search time and performance.
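As a concrete example of one of the voting rules studied, here is a minimal Borda count sketch (the ballots are invented for illustration):

```python
# Borda count: with m candidates, each ballot awards m-1 points to its top
# choice, m-2 to the next, and so on; the candidate with the most points wins.
from collections import defaultdict

def borda_winner(profile):
    """profile: list of ballots, each a ranking (best first) over the same candidates."""
    scores = defaultdict(int)
    for ballot in profile:
        m = len(ballot)
        for position, candidate in enumerate(ballot):
            scores[candidate] += m - 1 - position
    return max(scores, key=scores.get)

votes = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"]]
winner = borda_winner(votes)   # a scores 4, b scores 3, c scores 2
```

Unlike Kemeny's rule, the Borda winner is computable in linear time, which is one reason these rules trade off quality against search time differently.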
TensorFlow Training on the JVM
While the ML world is almost completely dominated by Python these days, commercial projects often run on the Java Virtual Machine. In such a context, it may be necessary to combine a TensorFlow model with Java. To prevent code duplication and make accessing existing JVM infrastructures easier, this talk will show how to train and run a TensorFlow Model from the JVM.
Power to the People: Free Energy in the Internet Age
Electricity generation and distribution have conceptually and practically stayed the same from the age of Edison and Tesla until now. Even renewable devices were, to a large extent, forced into the mold of large centralized power stations pumping electricity into the grid at one end, with kWh being charged to the consumer at the other. But the internet is quickly changing this. Platform businesses such as Amazon and Airbnb have defined a new way of doing business, and energy is next. We are looking at a future in which electricity will be free and utilities will provide it as a service. As IoT and energy come together, we will increasingly see virtual power plants, as well as convergence between electricity users and generators. This revolution in the energy industry will leave a vastly different business and technological landscape. Following an engaging TED-talk format, Peter Muller Bruhl, COO of GreenCom Networks, will shine a light on how IoT, connectivity, platform business models, and distributed generation will radically impact both energy businesses and energy users. We will look at how IoT technologies now hold the key to a more efficient, cheaper, and greener energy system.
Wednesday, 19 June 2019
NB-IoT deep dive: Power Saving and Cloud Connectivity in Practice
Cellular IoT is arriving in everyday IoT business. Telco providers all over the world are deploying their NB-IoT and/or Cat-M1 networks. In Germany, Deutsche Telekom was the first mover and started NB-IoT deployment in June 2017; Vodafone followed in August 2018. Still, the technology is young and questions need to be answered. mm1 Technology was one of the first companies to test and work within the Deutsche Telekom NB-IoT network and supported the integration into their backend and cloud systems. This presentation will give an overview of the current NB-IoT situation in Germany, Europe, and worldwide, and will share insights from our prototypes and developments, with a focus on energy-efficient approaches and flexible integration with multiple cloud systems.
Deep Learning with Small Data
Big data is the fuel of deep learning. But what can I do if my available data set is too small to adequately train the parameters of my machine learning model? This is one of the biggest obstacles on the way to using machine learning successfully. This talk illustrates the challenges of small data sets for deep learning beginners and machine learning practitioners, and then presents strategies that lead to success even with small data. It shows how I can enlarge my data set using various data augmentation techniques. The use of pre-trained neural networks and transfer learning is also introduced, so that small data does not become a big problem.
Prerequisites: basic knowledge of how machine learning methods work; a basic understanding of neural networks.
Learning goals: difficulties of applying machine learning with small data; overview of data augmentation techniques; transfer learning; GANs for data augmentation.
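The augmentation idea can be sketched in a few lines of NumPy; real pipelines would use richer, randomized transformations, for example via Keras' ImageDataGenerator:

```python
# Data augmentation in its simplest form: enlarge a small image data set with
# label-preserving transformations such as flips and shifts (a NumPy sketch).
import numpy as np

def augment(images):
    """images: array of shape (n, h, w). Returns originals plus flipped and shifted copies."""
    flipped = images[:, :, ::-1]                  # mirror each image horizontally
    shifted = np.roll(images, shift=1, axis=2)    # shift each image one pixel to the right
    return np.concatenate([images, flipped, shifted])

small = np.random.rand(10, 28, 28)    # a "small data" set of 10 images
bigger = augment(small)               # three times as many training examples
```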
Unsupervised Learning with Autoencoders
Autoencoders are a neural network architecture that allows a network to learn from data without requiring a label for each data point. This session explains the basic concept of autoencoders. We'll go over several variants for autoencoders and different use cases.
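The basic concept fits in a few lines of Keras; this sketch compresses 784-dimensional inputs to a 32-dimensional code, with layer sizes chosen purely for illustration:

```python
# A minimal autoencoder sketch: encode to a small bottleneck, then reconstruct.
# The reconstruction loss needs no labels, which is what makes it unsupervised.
from tensorflow import keras

autoencoder = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(32, activation="relu"),       # encoder -> 32-dim bottleneck code
    keras.layers.Dense(784, activation="sigmoid"),   # decoder -> reconstruction
])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
# Unsupervised training: the input is its own target.
# autoencoder.fit(x_train, x_train, epochs=10)
```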
Prevention of Claims with IoT and AI
Most insurance companies are focused on hedging risks. But what if damage never occurs because the policyholder is aware of the threat, has the chance to avoid a loss, and is rewarded for risk-aware behavior? Most insurance products available on the market do not reflect the individual risk of the policyholder, and risk avoidance is not rewarded. But most customers would rather keep their valuables than receive compensation for a loss or damage; insurance is more like a safeguard in case something bad happens. Customers are also price-sensitive and are looking for ways to save money. IoT devices can collect environmental parameters and check them against predictive models. Smart services can alert the customer to unexpected risks and provide advice on how to reduce them. In this talk, we highlight the possibilities of IoT, ML, and data-centric decisions for surviving in a highly competitive market in the future.
Deep Learning advances for Signal Processing
Deep neural networks are becoming irreplaceable for analyzing most kinds of data that humans are supposed to excel at: images, video, sound, and text. Meanwhile, we are forgetting about another very important source of data: signals, or time series. They may get less hype in public, but they benefit a lot from deep learning compared to classical approaches, especially in IoT. In this talk, we will review the sources of time series, the business goals we pursue when analyzing them, the "old" tools for analysis, and how deep neural nets surpass them. We will cover the latest trends as well as bust some myths. Moreover, we will see how generative models can be applied to signal processing as well. After this talk, you will be able to boost your current solutions in signal processing or time series analysis with deep learning. It will also be interesting for practitioners in other areas like computer vision or NLP, since we will discuss some concepts that are widely applicable. Previous experience with time series is not required, but some theoretical or practical understanding of machine learning and/or neural networks is preferred.
Mining Software Development History: Approaches and Challenges
Software development history, typically represented as a version control system log, is a rich source of insights into how a project evolved and how its developers work. What's perhaps more important: events from the past can predict the future. This talk gives a fun history of mining examples and presents some of the available tooling. Topics include embeddings, dynamic time warping, seriation, and HDBSCAN.
"AutoAutoML" - Towards a Standardized Automated Machine Learning Pipeline API
Automated Machine Learning is rapidly becoming a pervasive tool for data scientists and machine learning practitioners to quickly build accurate machine learning models. Recent AutoML products from Google, Microsoft, AutoSKLearn, Auger.AI, and others emphasize a programmatic API approach (versus a visual leaderboard) to applying AutoML. These products try hundreds of algorithm/hyperparameter combinations to choose and tune the best predictive model. All of these products have a similar processing pipeline to achieve a deployed prediction capability: data importing, configuring training, executing training, evaluating winning models, deploying a model for predictions, and reviewing ongoing accuracy. With AutoML, ML practitioners can automatically retrain those models based on changing business conditions and the discovery of new algorithms. But they are often practically locked into a single AutoML product due to the work necessary to program that particular AutoML product's API. To address this, we propose a standardized automated machine learning pipeline: PREDICT (Prediction, Review, Evaluation, Deploy, Import, Configure and Train). We walk through a multi-vendor open source project called A2ML (http://github.com/deeplearninc/a2ml) that implements this pipeline for Google Cloud AutoML, Microsoft Azure AutoML, AutoSKLearn, H2O, and Auger.AI. We then show building an application and trained model with multiple AutoML products simultaneously using this standard API.
Best practices for successful Industrial IoT projects
Industrial customers are making an important "data driven" transition in technology and business model, which will lead to completely new opportunities and value-added services for Industry 4.0 customers. The session deals with the special requirements of Industrial IoT projects, such as global presence with high availability, efficient creation and maintenance of Industrial IoT solutions, local IoT operation without an Internet connection, low total cost of ownership (TCO), and advanced security. A highlight of the session is a live IoT showcase used to demonstrate preferred implementation approaches such as utilizing a leading global IoT platform, serverless computing as a game changer, IoT edge computing, and stream analytics. As a summary, we provide some best practices for successful Industrial IoT projects. The session aims to educate decision makers about the specifics of Industrial IoT solutions and their preferred implementation approaches. Instead of a dry PowerPoint presentation, listeners can easily understand the implementation approaches for Industrial IoT projects by seeing a live showcase, the configuration, and the code. Ad-hoc questions will be answered by showing details of the live showcase.
Smart Voice: The Path to the Use Case for Digital Voice Assistants
"What do I do with it?" and "What's in it for me?" are fundamental questions that want answering before any project is implemented. Digital voice assistants bring new characteristics that companies have not encountered before and now have to learn. Some of these characteristics are disruptive and demand new perspectives, but they also offer excellent opportunities in many segments. The path to the use cases suited to Smart Voice is actually not that long, and discovering them is a lot of fun.
Evolution 3.0: Solve your everyday Problems with genetic Algorithms
When was the last time you wrote an algorithm to plan your diet? As programmers, we do amazing things in our everyday job, but rarely do we use our knowledge at home. In this talk, I will introduce genetic algorithms. I'll share how I coded a genetic algorithm from scratch, using it to generate my weekly schedule and create a smart diet planner. We will go through the different stages of the algorithm and explore how they affect the algorithm's solutions. Let me show you a different side of genetic algorithms and you will discover a new way to solve your everyday problems.
Data Analysis in IoT: from Machine Control to Anomaly Detection and Prediction
Devices and machines of all kinds – from lights and elevators to entire factory plants – produce enormous amounts of data in a very short time, which are visualized in dashboards and stored in databases. In the process, an essential benefit of digitalization is lost: using the accumulated data to draw conclusions about future situations and events based on the experience of the past. With the help of cloud approaches, data analyses for detecting and predicting anomalous events can be realized in order to exploit the added value of IoT. Using the model example of a PLC-controlled production line, we show the path of the data from acquisition at the controller to anomaly detection and prediction in the cloud.
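As a minimal illustration of the kind of analysis described (not the actual cloud pipeline from the talk), even a simple z-score rule flags readings that deviate strongly from a reference window; the sensor values and threshold below are invented:

```python
import statistics

# Reference window of "normal" sensor readings (e.g. temperature in C).
reference = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 20.1, 19.7, 20.0, 20.2]
mean = statistics.mean(reference)
stdev = statistics.stdev(reference)

def is_anomaly(value, k=3.0):
    # Flag a reading more than k standard deviations from the mean.
    return abs(value - mean) > k * stdev

readings = [20.0, 20.4, 27.5, 19.9]
anomalies = [v for v in readings if is_anomaly(v)]
```

Real deployments replace this static window with rolling statistics or learned models, but the detect-against-a-baseline pattern is the same.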
Honey Bee Conservation using Deep Learning
Honey bee colony assessment is usually carried out via the laborious manual task of counting and classifying comb cells. Beekeepers perform this task many times throughout the year to assess the colony's strength and to track its development. As you can imagine, this is an extremely time-consuming and error-prone task. We will share our experience with the development of DeepBee, a tool for automatic honeybee colony assessment. DeepBee encapsulates an image classification pipeline using classical image processing methods and state-of-the-art Deep Neural Networks (DNNs) for image segmentation and classification. To arrive at the final solution, we compared 13 distinct DNN architectures and chose the best model based on several metrics. We discuss the steps taken from image collection to the delivery of the final solution, highlighting the mistakes we made during the process, the hurdles we overcame, and the lessons learned. The project was developed at the Polytechnic Institute of Bragança.
Hype or Reality: Face Anti-Spoofing Solution powered by Deep Learning
Biometric data is becoming a common replacement for traditional authentication systems, and face authentication is one of the most convenient and reliable ways to access data and resources. However, this idea must be backed up with means of preventing spoofing attacks, delivering the essential level of reliability. We've got a solution: based on deep learning, it has already been put into practice in our projects. This session will take a look at deep-learning-based anti-spoofing from a different angle, showing the optimal way of implementation.
Retrofitting: lessons learned from getting old things connected
In 2011, Valentin Sawadski's co-founders at tado° sat him in front of a heating system built in 1996 and told him to "get it online". In 2019, Valentin still sees a lot of people working on retrofitting production equipment, heavy machinery, and other "old tech" with connectivity and sensing capabilities. In this keynote, he shares his biggest learnings with the audience. Not just the requirements of the project setup will be scrutinized, but also the general approach of how to tackle it. Moreover, the most helpful tools and technologies will be highlighted.
Panel: Ask the ML Experts
In this panel session we'd like to wrap up what we've learned in the sessions of the conference. We'd also like to give everybody in the audience an opportunity to put additional questions to some of the speakers. In which way does machine learning change the way we produce digital solutions? What impact does it have on our customers' business? Are there any "good old habits" we need to get over if we want to get into the "machine learning arena"? Let's discuss.
Embedded Devices and Machine Learning Models
TensorFlow has already made it possible to use trained machine learning models on smartphones. TensorFlow Lite goes one step further and runs TensorFlow models on a Raspberry Pi. uTensor even puts AI on a microcontroller (MCU). MCUs are small, cheap, and energy efficient, but they are also slow and have little RAM, which doesn't make things any easier. In my presentation I analyze meaningful use cases, possibilities, and ideas for machine learning on embedded devices. I'll also show some examples and live demos.
How we used Reinforcement Learning to solve the Abbey of Crime
Do you know The Abbey of Crime? The abbey is an 8-bit game (for the ZX Spectrum and Amstrad CPC) that in 1987 became the first 3D (2.5D) RPG. The game is a marvel from a technological point of view: in less than 120 KB it stores the sound, the images, all the program logic, and the data. Did you manage to finish the game without help? I do not know anyone who made it through unaided. Despite its size, The Abbey of Crime is one of the most complicated games ever developed. In this talk, we will tell how we designed and built an AI capable of playing on its own and completing the game.
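To give a flavor of reinforcement learning in miniature (the agent for the actual game is of course far more involved), here is a tabular Q-learning sketch on a toy corridor environment; all states, rewards, and hyperparameters are invented for illustration:

```python
import random

random.seed(1)

# Toy environment: a corridor of states 0..4, goal at state 4.
# Actions: 0 = left, 1 = right. Reward 1 on reaching the goal, else 0.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON, EPISODES = 0.5, 0.9, 0.1, 500

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: Q[state][action]

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

for _ in range(EPISODES):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        nxt, reward, done = step(state, action)
        # Q-learning update rule.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
        state = nxt

# Greedy policy after training: 1 means "go right" in every pre-goal state.
policy = [0 if q[0] > q[1] else 1 for q in Q]
```

Scaling this idea to an adventure game mostly means replacing the tiny state space and reward signal with ones derived from the game's world.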
Does Deep Learning make Feature Engineering obsolete?
Machine learning is all about the data. If the data set is good, in most cases you will not need a complex algorithm to solve your problem. However, if it is not, constructing an informative feature vector can be very challenging. At least that was the case for quite a while. Some people believe that with the increasing efficiency of deep learning algorithms, feature engineering has become less important or even obsolete. First we will have a quick catch-up on feature engineering basics. Then, based on real-world cases, we will look into applications of FE in different subject fields and try to give an honest opinion on the question raised above: is feature engineering obsolete?
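For readers new to the topic, a minimal sketch of hand-crafted features derived from a raw record might look like this; the record fields and the features themselves are invented for illustration:

```python
from datetime import datetime

# A raw event record as it might arrive from a transaction log.
record = {"timestamp": "2019-06-17T14:30:00", "amount": 249.99, "country": "DE"}

def engineer_features(rec):
    ts = datetime.fromisoformat(rec["timestamp"])
    return {
        "hour_of_day": ts.hour,                      # time-of-day signal
        "is_weekend": ts.weekday() >= 5,             # Monday=0 .. Sunday=6
        "amount_digits": len(str(int(rec["amount"]))),  # crude magnitude bucket
        "is_domestic": rec["country"] == "DE",
    }

features = engineer_features(record)
```

Each feature encodes domain knowledge (daily rhythms, order magnitude, locality) that a model would otherwise have to discover from raw values on its own.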
Everybody who has ever used a chatbot will have a hard time disproving this title. But why is it that chatbots are so bad most of the time? Is the technology really not ready? And why is it, then, that some chatbots do work? In this talk, I will dive deeper into the why and how. I will show you the biggest mistakes we made at Chatlayer, some of the failures we encountered, and the lessons we learned on how to build a good chatbot.
Deep probabilistic Modelling with Pyro
The success of deep neural networks in areas as diverse as image recognition and natural language processing has been outstanding in recent years. However, classical machine learning and deep learning algorithms can only propose the most probable solutions and are not able to adequately model uncertainty. In this talk, I will demonstrate how appropriate modelling of uncertain knowledge and reasoning leads to more informative results that can be used for better decision making. Recently, there has been a lot of progress in combining the probabilistic paradigm with deep neural architectures. In the past, computational probabilistic methods and tools lacked scalability and flexibility when it came to large data sets and high-dimensional models. I will give an introduction to probabilistic and deep probabilistic modelling using the scalable probabilistic programming language Pyro, which runs on top of PyTorch. I will also show you real-world examples where the results clearly benefit from a probabilistic approach.
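To illustrate the probabilistic idea in miniature (deliberately without Pyro itself), here is a grid-approximation posterior for a coin's bias; the full posterior, rather than a single point estimate, is what quantifies the uncertainty the abstract talks about:

```python
# Observed data: 7 heads in 10 flips. What is the coin's bias p?
heads, flips = 7, 10

grid = [i / 100 for i in range(101)]   # candidate values for p
prior = [1.0] * len(grid)              # flat prior over p

def likelihood(p):
    # Binomial likelihood up to a constant factor.
    return (p ** heads) * ((1 - p) ** (flips - heads))

unnorm = [pr * likelihood(p) for p, pr in zip(grid, prior)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

# Point estimate plus spread, both read off the same posterior.
post_mean = sum(p * w for p, w in zip(grid, posterior))
```

A probabilistic programming language like Pyro automates exactly this kind of inference for models far too large for a grid.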
IoT and the Law
Increasingly, the law imposes concrete requirements on secure IoT systems. While access to this area used to be opened mainly through general clauses, recent regulatory efforts at the European and national level explicitly include IoT, not least because business and society are ever more pervaded by the new technology and dependencies are growing. The talk "IoT and the Law" is divided into two parts: First, the regulatory proposals on the Internet of Things in the EU Cybersecurity Act and in the German IT Security Act (IT-Sicherheitsgesetz, IT-SiG) 2.0 are worked out as predominantly public-law requirements. The second part then turns to the question of which civil-law IT security requirements, in terms of both contract and liability, manufacturers and sellers must satisfy once the product has been placed on the market; a central keyword here is the duty of product monitoring.
The Data Janitor returns
This talk is for the underdog. If you're trying to solve data-related problems with no or limited resources, be they time, money, or skills, look no further. This talk is opinionated and deals with GDPR, deep learning, and all the hype. How does data infiltrate the organization? Which roles come first, what problems do they solve, and what problems do they introduce? A down-to-earth approach in this hype-driven environment to make decisions impactful and practical, based on real-world experience, not product brochures and GitHub repository stars.