Have your Machine Learning experiments been floating around Jupyter Notebooks, with your well-performing trained models never leaving your local machine? If YES, this tutorial is for you: you will learn how to serve your model in production.
If you also want to learn more about Streamlit, FastAPI and Docker, and why to use them, keep reading.
In this article I will cover:
Before getting into the implementation I will…
With growing technological and digital adoption and the power of Artificial Intelligence, the use of data has become common and crucial in tailoring IT solutions. But the challenges surrounding data semantics, access and processing are still growing. For this reason, standardisation is key.
I want to build an application that can be consumed through an endpoint or an interface, so that consumers can send data to it. To make this consumption scenario interoperable, we need a way to standardise this endpoint/interface. That is what this article is about.
Dockerizing is the process of packaging, deploying, and running applications using Docker containers.
Docker provides the ability to package and run an application in an isolated environment called a container. Containers are lightweight and contain everything needed to run the application, so you do not need to rely on what is currently installed on the host. You can easily share containers, and be sure that everyone you share with gets the same container that works in the same way.
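As a minimal sketch, a Dockerfile for packaging a Python service could look like the following. The file names (`app.py`, `requirements.txt`) and the port are assumptions for illustration:

```dockerfile
# Hypothetical minimal Dockerfile for a Python web app.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

Building the image with `docker build` and starting it with `docker run` gives you the isolated, shareable container described above.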
In this article I will cover the following:
Apache NiFi is open-source software for automating and managing data flow between systems. It provides a web-based user interface to create, monitor, and control data flows. Its main components are FlowFiles, which represent each piece of data, and Processors, which are responsible for creating, sending, receiving, transforming, routing, splitting, merging, and processing FlowFiles.
This article covers the steps to convert files from JSON to CSV using the ConvertRecord processor. There are many other ways to perform data transformation from JSON to CSV, but this is the most straightforward and simple one.
Let’s get into it!!
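To make the transformation concrete, here is roughly what ConvertRecord does to the data, sketched in plain Python. The sample records are made up; in NiFi itself the processor is configured with a record reader and writer (no code required):

```python
import csv
import io
import json

def json_records_to_csv(json_text: str) -> str:
    """Rough Python equivalent of ConvertRecord with a JSON
    reader and a CSV writer: flat JSON records in, CSV out."""
    records = json.loads(json_text)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

sample = '[{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]'
print(json_records_to_csv(sample))
```

The field names become the CSV header row, and each JSON record becomes one CSV line, which mirrors the processor's behavior for flat records.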
This article covers all you need to know about Spark and its ecosystem, with a detailed dive into the main concepts of Spark and its architecture.
Apache Spark is an open-source distributed general-purpose cluster-computing framework. Spark provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.
Apache Spark is used for big data workloads. It utilizes in-memory caching and optimized query execution for fast queries against data of any size. Simply put, Spark is a fast and general engine for large-scale data processing.
Apache Spark is the leading platform for large-scale SQL, batch processing, stream…
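To illustrate the programming model, here is the classic word count expressed with the same transformation chain (flatMap, map, reduceByKey) that Spark's RDD API uses. This sketch uses plain Python built-ins so it runs without a Spark cluster, and the sample lines are made up:

```python
from collections import defaultdict

lines = ["spark is fast", "spark is general"]

# flatMap: split each line into individual words
words = [w for line in lines for w in line.split()]
# map: pair each word with a count of 1
pairs = [(w, 1) for w in words]
# reduceByKey: sum the counts per word (Spark performs this
# step in parallel across the cluster's partitions)
counts = defaultdict(int)
for word, n in pairs:
    counts[word] += n

print(dict(counts))  # {'spark': 2, 'is': 2, 'fast': 1, 'general': 1}
```

In PySpark the same pipeline would be written as chained calls on an RDD (`flatMap`, `map`, `reduceByKey`), with Spark distributing each stage across the cluster.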
This is the story of AI Hack, the biggest AI hackathon in Tunisia, Africa, and the Middle East region.
“AI Hack Tunisia 2019” was a 3-day AI hackathon in which many dreams came true. It gathered the AI community in one place for one goal and let people get inspired by the best AI experts and mentors coming from Google, InstaDeep, Expensya and other entities.
The first 20-hour hackathon is an individual Machine Learning challenge, intended for individuals passionate about AI.
It all started back in November 2018, two months after the deadline of the Campus Director application. I somehow got admitted, but I call it fate; things happen for a reason.
One summer day in 2019, I had a coffee with a friend who works with the Hult Prize Foundation, just before his flight to London to attend the Global Accelerator. I told him, “I wish I could come with you; that would be a dream come true.” He said, “Rihab, buy your ticket and let’s go.”