According to SAE International, microtransit is defined as a privately or publicly operated, technology-enabled transit service that typically uses multi-passenger or pooled shuttles or vans to provide on-demand or fixed-schedule services with either dynamic or fixed routing.
Micro-transit offers a superior customer experience and more efficient services through real-time supply-demand matching, dynamic pricing, live tracking, cashless payments, customized seat selection, and more. This is driving numerous transit agencies to offer micro-transit services in different capacities. Micro-transit could become an alternative for commuters who would otherwise rely on conventional public transport, and the trend is likely to grow.
Customer retention stays high as long as providers can ensure a seamless experience, and the IT experts that build software solutions for these companies play a key role in delivering it. By leveraging a solution provider’s technology expertise, micro-transit companies can sustainably meet the changing market requirements of shared urban mobility.
How we developed an on-demand micro-transit product empowered with a scalable, high-performance ride-matching and relay algorithm
Transportation enterprises providing on-demand taxi services constantly look for ways to increase operational efficiency by reducing the time required for a passenger (demand) to get a cab (supply), as well as for cab drivers (supply) to be matched with the next ride request (demand).
The main component of such an on-demand micro-transit solution is the ride-matching and relay algorithm, which identifies the right cab for a specific ride request.
Objectives / Design considerations for the algorithm:
- The application should respond to ride requests in real time to ensure that passengers spend minimal time waiting for a match.
- It should handle unpredictable peaks in ride-request load. The algorithm should auto-scale to handle all requests while providing the same performance as during regular off-peak hours.
- The algorithm functions based on multiple configurable parameters such as geospatial proximity, dry-run distance, fair distribution of ride opportunities among drivers, ride-request rejections, average KPI ratings of drivers and passengers, variable pricing strategies, the live location of a cab, etc.
- It should be able to integrate with online navigation services that account for congestion data. The algorithm should execute geospatial queries for the real-time calculations involving driver-vehicle location data.
- The algorithm should ensure that the service is available 24/7.
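As a rough illustration of the geospatial-proximity requirement above, the check can be sketched with a great-circle (haversine) distance filter. The 3 km threshold, the cab dictionary shape, and the function names here are illustrative assumptions, not details of the production algorithm, which also weighs dry-run distance, driver KPIs, and other parameters.

```python
# Minimal sketch of a geospatial proximity filter (assumed, not production code).
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def cabs_within(pickup, cabs, max_km=3.0):
    """Return cabs whose live location is within max_km of the pickup point."""
    return [c for c in cabs
            if haversine_km(pickup[0], pickup[1], c["lat"], c["lon"]) <= max_km]
```

In practice this kind of filtering is typically pushed down into geospatial queries (e.g. against an indexed store) rather than computed in application code for every cab.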
Why did we choose AWS Lambda with Python?
The application had to handle a large number of concurrent ride requests during peak times without degrading performance. With limited time to market, we started exploring cloud-based services that could serve as a platform for the algorithm.
AWS Lambda was known to be highly scalable and able to process high loads. Lambda has a serverless architecture, which means you do not have to keep a server instance running at all times. The product’s APIs were being developed in .NET, and since AWS Lambda supported .NET, it was the natural first choice.
We completed a proof of concept using AWS Lambda with .NET functions. During this phase, however, we ran into a well-known Lambda limitation: the ‘cold start’. We had to find a way to manage cold starts.
What is a Cold Start?
A cold start is the start-up time required to get a serverless application’s environment running when it is invoked after a period of inactivity. Lambda functions run on ephemeral containers managed by AWS, which uses its own algorithms to manage the infrastructure dynamically based on the Lambda configuration. If a Lambda function is not invoked for a while, AWS shuts down its containers to free up compute and memory resources. When the function is triggered again, resources must be reallocated, which results in latency.
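The container lifecycle described above can be made concrete with a tiny handler sketch: code at module level runs once, when a container is initialized (the cold start), while the handler body runs on every invocation. This is an illustrative example, not the product's code.

```python
# Sketch of cold vs warm invocations in a Python Lambda handler (illustrative).
import time

# Module-level code executes once per container, i.e. on a cold start.
INIT_TIME = time.time()

def lambda_handler(event, context):
    # On a warm invocation, INIT_TIME is reused from the earlier cold start,
    # so the container "age" grows across invocations of the same container.
    container_age = time.time() - INIT_TIME
    return {"warm": container_age > 1.0,
            "container_age_seconds": round(container_age, 3)}
```

Expensive setup (SDK clients, loading geospatial libraries, opening connections) is usually placed at module level so it is paid once per cold start rather than on every request.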
When we tried the proof of concept with AWS Lambda services written in .NET, we observed cold start times in the range of 5-10 seconds. Our business APIs were written in .NET, while the GIS-based APIs were written in Python, which has excellent libraries for geospatial functions and large data volumes. Since AWS Lambda supported Python, we decided to try Lambda services written in Python.
The cold start for AWS Lambda with Python was under half a second, so we settled on AWS Lambda with Python services.
Designing the ride-matching and relay algorithm
Having settled on the technology, we designed the ride-matching and relay algorithm as follows:
- We decided to use DynamoDB in AWS to store and manage the ride requests. DynamoDB has very good read/write response times and is known for its performance and ability to auto-scale. Ride requests from passengers were written into a DynamoDB table; multiple ride requests may be raised concurrently by different passengers.
- The ride-matching and relay algorithm picks up the ride requests from the DynamoDB table for processing.
- The algorithm is designed with a three-level hierarchy of Lambdas to separate layers of responsibility to optimize the execution time. Each layer processes information and returns the result to the parent Lambda function:
- Level 1 Lambda – This service sits at the highest level. It reads the DynamoDB table for new ride requests at regular intervals, groups them into sets, and spawns Level 2 Lambda services.
- Level 2 Lambda – This service receives a set of ride requests as input. It spawns a Level 3 Lambda service for each ride request, accepts the cab driver selected for each request, and writes the result to DynamoDB against the ride request.
- Level 3 Lambda – This service receives a specific ride request ID as input. Its objective is to identify the cabs that qualify for the ride request, score them, and select the highest-ranked cab to serve it. The scoring is based on multiple parameters such as distance, time, driver KPIs, and more.
- Based on the cab driver allocated against a ride request, the algorithm will invoke another Lambda service to send the ride request to the driver’s mobile app.
- If the driver rejects or misses the ride request, the request is passed back to the algorithm.
- If the algorithm cannot find any supply for a ride request within X seconds, another Lambda service removes the expired requests.
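The Level 1 → Level 2 fan-out described above can be sketched as a handler that batches pending requests and dispatches one asynchronous invocation per group. The batch size, the `invoke` callable, and the function name `"level2-ride-matcher"` are hypothetical; in a real deployment `invoke` would wrap `boto3.client("lambda").invoke(..., InvocationType="Event")` so Level 1 is not blocked while each Level 2 runs.

```python
# Hypothetical sketch of the Level 1 fan-out (names and batch size assumed).
import json

BATCH_SIZE = 10  # assumed grouping size for each Level 2 invocation

def batch(requests, size=BATCH_SIZE):
    """Split pending ride requests into fixed-size groups."""
    return [requests[i:i + size] for i in range(0, len(requests), size)]

def level1_handler(event, context, invoke=None):
    """Group new ride requests and dispatch one Level 2 invocation per group.

    `invoke` stands in for an asynchronous Lambda invocation; passing it in
    keeps this sketch testable without AWS credentials.
    """
    # In production, the pending requests would come from a DynamoDB query.
    pending = event.get("ride_requests", [])
    groups = batch(pending)
    for group in groups:
        if invoke:
            invoke("level2-ride-matcher", json.dumps({"ride_requests": group}))
    return {"groups_dispatched": len(groups)}
```

The asynchronous ("Event") invocation type matters here: each layer hands work down and returns, so the hierarchy parallelizes naturally and each ride request is scored independently at Level 3.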
Achieving the desired output & delighted customers
The application was developed, deployed to production, and met all client expectations and market requirements, which in turn resulted in happy customers. Passengers are able to find rides with minimal waiting times, thanks to the ride relay algorithm. The response time of the algorithm remained consistent across different loads. Even in scenarios where peak ride-request volumes rose well above expected levels, no deterioration in performance was noticed and the system coped with the unanticipated load.
In conclusion
Our ability to reap the benefits of on-demand micro-transit services will depend on companies creating superior products that support and adapt to dynamic market conditions. We at blackrock are proud to have developed and delivered an on-demand micro-transit product with high potential for innovation and scalability at reduced cost, ensuring tangible benefits for our client.
If you have a challenging idea for the mobility and transport sector and require an IT partner who’s equipped to help you make it happen, please feel free to get in touch with us at sales@blackrockdxb.com.