Path DevBlog 0x01
Hi, my name is Matthew Flannery, and welcome to the first post in our DevBlog series!
I made a short video about this blog post, and we're planning on making a few more! I hope you enjoy this.
During this series of posts, we'll dive deep into our latest technology updates, provide insight into why certain technical and architectural decisions were made, share our learnings with you and, most importantly, share our progress. We'll try to keep things entertaining, including doing some written interviews with key members of our development team, and perhaps some podcasts.
This first article provides an overview of the Path Network platform. We won't focus too much on describing what Path Network is, as that is expertly covered in our whitepaper, available at https://path.network.
The premise of the platform behind Path Network is simple: process data as close to real time as technically possible, accept connections from millions of Path Nodes concurrently at any time, and maintain those connections for the purpose of distributing Jobs and receiving Job Results.
The Path panel architecture follows microservices design paradigms and, at a very high level, consists of multiple backend services that provide web-scale real-time data streaming, a big data pipeline, and event analytics.
Our key technologies, each covered in more detail below, include:

- ReactJS for the panel frontend
- Serverless RESTful/GraphQL APIs
- Apache Kafka and Kafka Streams for real-time data pipelines
- Elixir and C++ for the Node-facing microservices API layer
- NodeJS and Docker for the Path Node client
- Kubernetes (AWS EKS) for orchestration and autoscaling
As you can see, our technology stack is diverse and pretty awesome to work with. Because we are dealing with millions of concurrent connections to Path Nodes at any time, distributed systems engineering and microservices architecture are paramount.
The panel frontend is a ReactJS SPA (Single Page Application) which invokes various backend RESTful/GraphQL serverless APIs (Kafka producers), which push jobs into a Kafka topic. These messages are processed into a Kafka Streams state store, which allows continuous querying, reading, writing, and processing of data in Kafka in real time and at scale. The state store is then consumed by an API that serves jobs to the clients (Path Nodes).
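To make the flow concrete, here is a minimal sketch of what a job message might look like as it enters the pipeline. The field names and job types are illustrative assumptions, not the actual Path schema; a Kafka message is fundamentally a key/value pair, and keying (here by job ID) controls which partition a message lands on.

```typescript
// Hypothetical shape of a Job as it might flow from the producer API
// into a Kafka topic. Field names are illustrative, not Path's schema.
interface Job {
  jobId: string;
  type: "traceroute" | "tcp_check" | "http_check";
  target: string;   // host or URL the Path Node should test
  issuedAt: number; // epoch millis, set by the producer API
}

// A Kafka message is a key/value pair; the key drives partitioning,
// and the value carries the serialized payload.
function toKafkaMessage(job: Job): { key: string; value: string } {
  return { key: job.jobId, value: JSON.stringify(job) };
}

const msg = toKafkaMessage({
  jobId: "job-42",
  type: "traceroute",
  target: "path.network",
  issuedAt: Date.now(),
});
console.log(msg.key); // "job-42"
```

In a real producer you would hand this key/value pair to a Kafka client library; keying by job ID spreads jobs evenly, while keying by node or region would instead guarantee per-node ordering.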
Our Kafka deployment is architected to be highly available and fault tolerant, and it autoscales to meet intensive demand as the platform's user base grows. This was a real challenge to achieve within Kubernetes, but worth the effort.
This allows us to compete with some of the world’s largest stream processing systems, and together with our distributed Path Node platform, puts us in a position to effectively deliver an unrivalled internet intelligence, monitoring, analytics, and real-time streaming platform.
Sitting between Kafka and the Path Nodes is a microservices API layer written in Elixir and C++. This layer, too, autoscales within Kubernetes.
In both our Production and Non-Production environments, we use EKS, the managed Kubernetes service provided by AWS. So far, we're big fans of it. We leverage Auto Scaling Groups within AWS so that the K8S nodes autoscale with demand.
Our engineering team is composed of software engineers that were hand-picked by myself. They are either people I have worked with in the past or engineers well known within the Sydney technology sector, for example people who have spoken at or organise various tech-focused Meetups.
We are all fascinated by technology, and extremely passionate about it. When you mix a group of people who love tech, with some really great pieces of technology and an excellent idea as the driving force behind it, what you get is something that's truly beautiful. It's really rewarding to work with such smart people.
Our engineering team combines the right amount of expertise in Software Engineering, DevOps, Network Engineering and Information Security. Throughout this blog series, each team member will introduce themselves by writing a blog post, so stay tuned and you'll get to know them all a little better!
Final thoughts and latest updates
That's really it for the introduction. There is so much more that I want to talk about, but I have it planned for separate articles.
Before wrapping this up, I'd like to share a little bit of recent progress.
We are finally at the stage where we are performing real-world tests: load testing the platform from thousands of nodes distributed across the globe and conducting various kinds of application performance and network testing. If you come from a network engineering or ISP background and have ever used a "Looking Glass" router, you'll be thoroughly impressed by our platform. Performing routing and BGP tests, traceroutes, and TCP/UDP endpoint connectivity checks from multiple ISPs across the globe, with real-time data coming back, is rather powerful.
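As a rough idea of what a TCP connectivity check involves, here is a minimal, self-contained sketch using only the Node standard library. This is an illustration of the general technique, not Path's actual Node client code; the function name and timeout are assumptions.

```typescript
import * as net from "node:net";

// Measures TCP connect latency to host:port in milliseconds, rejecting
// on error or timeout. A simplified stand-in for what a distributed
// node might run when handed a TCP connectivity job.
function tcpConnectLatency(host: string, port: number, timeoutMs = 3000): Promise<number> {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    const sock = net.connect({ host, port });
    sock.setTimeout(timeoutMs);
    sock.once("connect", () => { sock.destroy(); resolve(Date.now() - start); });
    sock.once("timeout", () => { sock.destroy(); reject(new Error("timeout")); });
    sock.once("error", reject);
  });
}

// Demo against a throwaway local server so the sketch is runnable as-is.
const server = net.createServer().listen(0, "127.0.0.1", async () => {
  const { port } = server.address() as net.AddressInfo;
  const ms = await tcpConnectLatency("127.0.0.1", port);
  console.log(`connected in ${ms} ms`);
  server.close();
});
```

Run the same check from many vantage points at once and aggregate the results, and you get a Looking-Glass-style view of reachability and latency per ISP.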
One of the other really cool things we're starting to recognise the power and potential of is sending custom HTTP payloads (e.g. a POST request to a specific API endpoint) and expecting a certain response from that API within a certain time threshold.
Pretty cool, right?
Right now, we have a NodeJS + Docker release ready to go, which allows us to perform functional and load testing; however, we're planning to tie its public release to our mobile application, which is near completion.
It's most of the way there, with some minor functionality changes remaining and one thing I felt was rather critical: the user interface.
Here is the new design for the mobile app. Pretty sweet, eh?
Okay guys, that's it for now! Stay tuned for our next blog post, where we'll be talking about our distributed compute processing and our mobile application.