[YAML] A Streaming Inference Pipeline - Taxi Fare Estimation #35568
Conversation
Add a streaming YAML pipeline for the taxi fare regression problem, as well as a notebook for model training and deployment to Vertex AI.
Complete the model training and deployment notebook. Complete README.md
Assigning reviewers: R: @tvalentyn for label python. Note: If you would like to opt out of this review, comment with one of the available commands.
The PR bot will only process comments in the main thread (not review comments).
CC @chamikaramj and @damccorm
Quoted from the README:

```
See also [here](
https://cloud.google.com/bigquery/docs/datasets) for more details on
how to create BigQuery datasets
```

Suggested change:

```diff
- how to create BigQuery datasets
+ how to create BigQuery datasets.
```
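For readers following along, a minimal sketch of creating such a dataset with the BigQuery Python client; the project ID, dataset ID, and location below are placeholder assumptions, not values from this PR:

```python
# Minimal sketch: create a BigQuery dataset programmatically.
# The dataset ID and location are placeholders; replace with your own.
from google.cloud import bigquery

client = bigquery.Client()
dataset = bigquery.Dataset("my-project.taxifare_demo")
dataset.location = "US"
client.create_dataset(dataset, exists_ok=True)  # no-op if it already exists
```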
Quoted from the README:

```
export NUM_WORKERS="3"

python -m apache_beam.yaml.main \
  --yaml_pipeline_file transforms/ml/taxi-fare/streaming_sentiment_analysis.yaml \
```

Suggested change:

```diff
- --yaml_pipeline_file transforms/ml/taxi-fare/streaming_sentiment_analysis.yaml \
+ --yaml_pipeline_file transforms/ml/taxi-fare/streaming_taxifare_prediction.yaml \
```
"source": [ | ||
"## Training\n", | ||
"\n", | ||
"For a quick '0-to-1' model serving on Vertex AI, the model training process below is kept straighforward using the simple yet very effective [tree-based, gradient boosting](https://en.wikipedia.org/wiki/Gradient_boosting) algorithm. We start of with a simple feature engineering idea, before moving on to the actual training of the model using the [XGBoost](https://xgboost.readthedocs.io/en/stable/index.html) library.\n" |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
"For a quick '0-to-1' model serving on Vertex AI, the model training process below is kept straighforward using the simple yet very effective [tree-based, gradient boosting](https://en.wikipedia.org/wiki/Gradient_boosting) algorithm. We start of with a simple feature engineering idea, before moving on to the actual training of the model using the [XGBoost](https://xgboost.readthedocs.io/en/stable/index.html) library.\n" | |
"For a quick '0-to-1' model serving on Vertex AI, the model training process below is kept straighforward using the simple yet very effective [tree-based, gradient boosting](https://en.wikipedia.org/wiki/Gradient_boosting) algorithm. We start off with a simple feature engineering idea, before moving on to the actual training of the model using the [XGBoost](https://xgboost.readthedocs.io/en/stable/index.html) library.\n" |
"source": [ | ||
"### Simple Feature Engineering\n", | ||
"\n", | ||
"One of the columns in the dataset is the `pickup_datetime` column, which is of [datetimelike](https://pandas.pydata.org/docs/reference/api/pandas.Series.dt.html) type. This makes it incredibly easy for performing data analysis on time-series data such as this. However, ML models don't accept feature columns with such a custom data type that is not a number. Some sort of conversion is needed, and here we'll choose to break this datetime column into multiple feature columns.\n" |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
"One of the columns in the dataset is the `pickup_datetime` column, which is of [datetimelike](https://pandas.pydata.org/docs/reference/api/pandas.Series.dt.html) type. This makes it incredibly easy for performing data analysis on time-series data such as this. However, ML models don't accept feature columns with such a custom data type that is not a number. Some sort of conversion is needed, and here we'll choose to break this datetime column into multiple feature columns.\n" | |
"One of the columns in the dataset is the `pickup_datetime` column, which is of [datetime like](https://pandas.pydata.org/docs/reference/api/pandas.Series.dt.html) type. This makes it incredibly easy for performing data analysis on time-series data such as this. However, ML models don't accept feature columns with such a custom data type that is not a number. Some sort of conversion is needed, and here we'll choose to break this datetime column into multiple feature columns.\n" |
"\n", | ||
"Predicting taxi fare is a supervised learning, regression problem and our dataset is tabular. It is well-known in common literatures (_[1]_, _[2]_) that [gradient-boosted decision tree (GBDT) model](https://en.wikipedia.org/wiki/Gradient_boosting) performs very well for this kind of problem and dataset type.\n", | ||
"\n", | ||
"The input columns used for training (and subsequently for inference) will be the original feature columns (pick-up/drop-off longitude/latitude and the passenger count) from the dataset, along with the additional engineered features (pick-up year, month, day, etc...) that we generated above. The target/label column for training is the fare amount column.\n" |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
"The input columns used for training (and subsequently for inference) will be the original feature columns (pick-up/drop-off longitude/latitude and the passenger count) from the dataset, along with the additional engineered features (pick-up year, month, day, etc...) that we generated above. The target/label column for training is the fare amount column.\n" | |
"The input columns used for training (and subsequently for inference) will be the original feature columns (pick-up/drop-off, longitude/latitude, and the passenger count) from the dataset, along with the additional engineered features (pick-up year, month, day, etc...) that we generated above. The target/label column for training is the `fare_amount` column.\n" |
Quoted from the notebook:

```
{
"cell_type": "markdown",
"source": [
"Save the trained model to the Google Cloud Storage bucket as model artifact."
```

Suggested change:

```diff
- "Save the trained model to the Google Cloud Storage bucket as model artifact."
+ "Save the trained model to the Google Cloud Storage bucket as a model artifact."
```
Quoted from the README:

```
under the License.
-->

## Streaming Taxi Fare Prediction Pipeline
```

Reviewer comment: I would note somewhere in this README that this is not tested like all of the other examples, just to make sure it's clear to the reader. Thanks.
Part of a larger effort #35069 and #35068 to add more examples involving Kafka and ML use cases.

Introduce a streaming inference pipeline for the regression problem of taxi fare estimation. The YAML pipeline reads from the public PubSub topic projects/pubsub-public-data/topics/taxirides-realtime and writes to a Kafka topic, then reads from the same Kafka topic and applies some transformations before the RunInference transform performs remote inference with the Vertex AI model handler and a custom-trained XGBoost model deployed to a Vertex AI endpoint.

A notebook for training and deploying this XGBoost model is also included.
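For orientation, a rough structural sketch of what such a Beam YAML pipeline can look like; the Kafka topic, bootstrap servers, schema fields, field mappings, and the model-handler config keys are all placeholder assumptions, not the PR's actual streaming_taxifare_prediction.yaml:

```yaml
# Structural sketch only -- names and config values are placeholders.
pipeline:
  transforms:
    - type: ReadFromPubSub
      name: ReadRides
      config:
        topic: projects/pubsub-public-data/topics/taxirides-realtime
        format: JSON
        schema:
          type: object
          properties:            # subset of the topic's fields, for illustration
            ride_id: {type: string}
            longitude: {type: number}
            latitude: {type: number}
            passenger_count: {type: integer}
    - type: WriteToKafka
      input: ReadRides
      config:
        topic: taxi-rides                 # placeholder Kafka topic
        bootstrap_servers: broker:9092    # placeholder
        format: JSON
    - type: ReadFromKafka
      name: ReadBack
      config:
        topic: taxi-rides
        bootstrap_servers: broker:9092
        format: JSON
        # schema: same JSON schema as above
    - type: MapToFields                   # the "some transformations" step
      name: PrepareFeatures
      input: ReadBack
      config:
        language: python
        fields:
          ride_id: ride_id
          features: "[longitude, latitude, passenger_count]"
    - type: RunInference
      input: PrepareFeatures
      config:
        model_handler:
          type: VertexAIModelHandlerJSON  # assumed handler type name
          config:
            endpoint_id: "1234567890"     # placeholder endpoint
            project: my-project
            location: us-central1
```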
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

- Mention the appropriate issue in your description (for example: `addresses #123`), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment `fixes #<ISSUE NUMBER>` instead.
- Update `CHANGES.md` with noteworthy changes.

See the Contributor Guide for more tips on how to make the review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.