
Conversation

charlespnh
Contributor


Part of a larger effort #35069 and #35068 to add more examples involving Kafka and ML use cases.

Introduce a streaming inference pipeline for the regression problem of taxi fare estimation.
The YAML pipeline reads from the public Pub/Sub topic projects/pubsub-public-data/topics/taxirides-realtime
and writes to a Kafka topic, then reads back from that same Kafka topic and applies some transformations before
the RunInference transform performs remote inference with the Vertex AI model handler and a custom-trained
XGBoost model deployed to a Vertex AI endpoint.
A notebook for training and deploying this XGBoost model is also included.
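
For readers skimming the thread, the pipeline shape described above can be sketched in Beam YAML roughly as follows. This is a rough sketch, not the PR's actual file: the Kafka topic name, broker, endpoint ID, and project are placeholders, and the exact config keys should be verified against the Beam YAML transform reference.

```yaml
# Sketch only: placeholder values throughout; verify config keys against
# the Beam YAML docs before use.
pipeline:
  type: chain
  transforms:
    - type: ReadFromPubSub
      config:
        topic: projects/pubsub-public-data/topics/taxirides-realtime
        format: JSON
        schema: ...               # ride fields elided
    - type: WriteToKafka
      config:
        topic: taxi-rides         # placeholder topic
    # A second stage then reads the same topic back and runs inference:
    - type: ReadFromKafka
      config:
        topic: taxi-rides         # placeholder topic
    - type: MapToFields           # feature engineering over the ride fields
      config: ...
    - type: RunInference
      config:
        model_handler:
          type: VertexAIModelHandlerJSON
          config:
            endpoint_id: "1234567890"   # placeholder
            project: my-project         # placeholder
            location: us-central1
```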


Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make the review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch)

Build python source distribution and wheels
Python tests
Java tests
Go tests

See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

@charlespnh charlespnh marked this pull request as ready for review July 14, 2025 16:06
Contributor

Assigning reviewers:

R: @tvalentyn for label python.

Note: If you would like to opt out of this review, comment assign to next reviewer.

Available commands:

  • stop reviewer notifications - opt out of the automated review tooling
  • remind me after tests pass - tag the comment author after tests pass
  • waiting on author - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).

@charlespnh
Contributor Author

CC @chamikaramj and @damccorm

@chamikaramj chamikaramj requested a review from derrickaw July 15, 2025 18:09
```
See also [here](
https://cloud.google.com/bigquery/docs/datasets) for more details on
how to create BigQuery datasets
```
Collaborator

Suggested change:

```diff
-how to create BigQuery datasets
+how to create BigQuery datasets.
```

Contributor Author

Fixed.

```
export NUM_WORKERS="3"

python -m apache_beam.yaml.main \
--yaml_pipeline_file transforms/ml/taxi-fare/streaming_sentiment_analysis.yaml \
```
Collaborator

Suggested change:

```diff
---yaml_pipeline_file transforms/ml/taxi-fare/streaming_sentiment_analysis.yaml \
+--yaml_pipeline_file transforms/ml/taxi-fare/streaming_taxifare_prediction.yaml \
```

Contributor Author

Oops this is terrible... Fixed.

```
"source": [
"## Training\n",
"\n",
"For a quick '0-to-1' model serving on Vertex AI, the model training process below is kept straighforward using the simple yet very effective [tree-based, gradient boosting](https://en.wikipedia.org/wiki/Gradient_boosting) algorithm. We start of with a simple feature engineering idea, before moving on to the actual training of the model using the [XGBoost](https://xgboost.readthedocs.io/en/stable/index.html) library.\n"
```
Collaborator

Suggested change:

```diff
-"For a quick '0-to-1' model serving on Vertex AI, the model training process below is kept straighforward using the simple yet very effective [tree-based, gradient boosting](https://en.wikipedia.org/wiki/Gradient_boosting) algorithm. We start of with a simple feature engineering idea, before moving on to the actual training of the model using the [XGBoost](https://xgboost.readthedocs.io/en/stable/index.html) library.\n"
+"For a quick '0-to-1' model serving on Vertex AI, the model training process below is kept straighforward using the simple yet very effective [tree-based, gradient boosting](https://en.wikipedia.org/wiki/Gradient_boosting) algorithm. We start off with a simple feature engineering idea, before moving on to the actual training of the model using the [XGBoost](https://xgboost.readthedocs.io/en/stable/index.html) library.\n"
```

Contributor Author

Fixed.

```
"source": [
"### Simple Feature Engineering\n",
"\n",
"One of the columns in the dataset is the `pickup_datetime` column, which is of [datetimelike](https://pandas.pydata.org/docs/reference/api/pandas.Series.dt.html) type. This makes it incredibly easy for performing data analysis on time-series data such as this. However, ML models don't accept feature columns with such a custom data type that is not a number. Some sort of conversion is needed, and here we'll choose to break this datetime column into multiple feature columns.\n"
```
Collaborator

Suggested change:

```diff
-"One of the columns in the dataset is the `pickup_datetime` column, which is of [datetimelike](https://pandas.pydata.org/docs/reference/api/pandas.Series.dt.html) type. This makes it incredibly easy for performing data analysis on time-series data such as this. However, ML models don't accept feature columns with such a custom data type that is not a number. Some sort of conversion is needed, and here we'll choose to break this datetime column into multiple feature columns.\n"
+"One of the columns in the dataset is the `pickup_datetime` column, which is of [datetime like](https://pandas.pydata.org/docs/reference/api/pandas.Series.dt.html) type. This makes it incredibly easy for performing data analysis on time-series data such as this. However, ML models don't accept feature columns with such a custom data type that is not a number. Some sort of conversion is needed, and here we'll choose to break this datetime column into multiple feature columns.\n"
```

Contributor Author

Fixed.
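
The datetime-to-features idea discussed in this thread can be sketched with pandas. A minimal sketch, assuming the column names from the dataset description; the exact set of derived features in the notebook may differ:

```python
import pandas as pd

# Expand a datetimelike column into numeric feature columns, since the model
# only accepts numeric inputs. Column names here are assumptions.
def expand_pickup_datetime(df: pd.DataFrame) -> pd.DataFrame:
    dt = pd.to_datetime(df["pickup_datetime"])
    out = df.copy()
    out["pickup_year"] = dt.dt.year
    out["pickup_month"] = dt.dt.month
    out["pickup_day"] = dt.dt.day
    out["pickup_hour"] = dt.dt.hour
    out["pickup_weekday"] = dt.dt.weekday  # Monday == 0
    return out.drop(columns=["pickup_datetime"])

df = pd.DataFrame({"pickup_datetime": ["2015-01-27 13:08:24"],
                   "passenger_count": [1]})
features = expand_pickup_datetime(df)
```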

```
"\n",
"Predicting taxi fare is a supervised learning, regression problem and our dataset is tabular. It is well-known in common literatures (_[1]_, _[2]_) that [gradient-boosted decision tree (GBDT) model](https://en.wikipedia.org/wiki/Gradient_boosting) performs very well for this kind of problem and dataset type.\n",
"\n",
"The input columns used for training (and subsequently for inference) will be the original feature columns (pick-up/drop-off longitude/latitude and the passenger count) from the dataset, along with the additional engineered features (pick-up year, month, day, etc...) that we generated above. The target/label column for training is the fare amount column.\n"
```
Collaborator
@derrickaw derrickaw Jul 15, 2025

Suggested change:

```diff
-"The input columns used for training (and subsequently for inference) will be the original feature columns (pick-up/drop-off longitude/latitude and the passenger count) from the dataset, along with the additional engineered features (pick-up year, month, day, etc...) that we generated above. The target/label column for training is the fare amount column.\n"
+"The input columns used for training (and subsequently for inference) will be the original feature columns (pick-up/drop-off, longitude/latitude, and the passenger count) from the dataset, along with the additional engineered features (pick-up year, month, day, etc...) that we generated above. The target/label column for training is the `fare_amount` column.\n"
```

Contributor Author

Fixed.

```
{
"cell_type": "markdown",
"source": [
"Save the trained model to the Google Cloud Storage bucket as model artifact."
```
Collaborator

Suggested change:

```diff
-"Save the trained model to the Google Cloud Storage bucket as model artifact."
+"Save the trained model to the Google Cloud Storage bucket as a model artifact."
```

Contributor Author

Fixed.

```
under the License.
-->

## Streaming Taxi Fare Prediction Pipeline
```
Collaborator
@derrickaw derrickaw Jul 15, 2025

I would note somewhere in this README that this is not tested like all of the other examples, just to make sure it's clear to the reader. Thanks.

Contributor Author

I did intend to add a test for the pipeline, but came across a few issues and it was taking some time... Anyway, a unit test is added.

@damccorm
Contributor

next action author

Add streaming YAML pipeline for the taxi fare regression problem,
as well as notebook for model training and deployment to Vertex AI
Complete the model training and deployment notebook.
Complete README.md
@charlespnh charlespnh force-pushed the yaml-taxi-fare-inference branch from 8df45a9 to be28050 Compare July 29, 2025 16:03
@charlespnh charlespnh requested a review from derrickaw July 29, 2025 18:08
@charlespnh
Contributor Author

PreCommit Prism Python job is failing at this test PrismRunnerTest.test_pardo_state_with_custom_key_coder. Seems unrelated to this PR change...

```
consumer_config):
topic: Optional[str] = None,
format: Optional[str] = None,
schem: Optional[Any] = None,
```
Collaborator

Suggested change:

```diff
-schem: Optional[Any] = None,
+schema: Optional[Any] = None,
```

```
This PTransform simulates the behavior of the ReadFromKafka transform by
reading from predefined in-memory data based on the Kafka topic argument.
Args:
```
Collaborator

needs schema arg doc string
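
The fake-source pattern under review here, routing the topic argument to predefined in-memory records, can be sketched without any Beam dependency. The class and record contents below are illustrative, not the PR's actual test code:

```python
from typing import Any, Optional

# In-memory records keyed by topic; stands in for a real Kafka cluster in
# tests. Topic name and fields are hypothetical.
_TOPIC_DATA = {
    "taxi-rides": [
        {"passenger_count": 1, "pickup_longitude": -73.98},
        {"passenger_count": 2, "pickup_longitude": -73.77},
    ],
}

class FakeReadFromKafka:
    """Test double mimicking ReadFromKafka's constructor arguments.

    Args:
      topic: topic name used to select the canned records.
      format: accepted for signature compatibility; unused here.
      schema: accepted for signature compatibility; unused here.
      consumer_config: accepted for signature compatibility; unused here.
    """

    def __init__(self, topic: Optional[str] = None,
                 format: Optional[str] = None,
                 schema: Optional[Any] = None,
                 consumer_config: Optional[dict] = None):
        self.topic = topic

    def read(self) -> list:
        # Unknown topics yield no records, mirroring an empty subscription.
        return list(_TOPIC_DATA.get(self.topic, []))
```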

@chamikaramj
Contributor

I believe end-to-end testing of this is blocked by #35715

@charlespnh
Contributor Author

charlespnh commented Jul 29, 2025

I believe end-to-end testing of this is blocked by #35715

No it's not. This pipeline doesn't make use of MLTransform. Also, this has been tested e2e with the following Dataflow job - https://console.cloud.google.com/dataflow/jobs/us-central1/2025-07-18_07_16_21-10544402510837439480;bottomTab=WORKER_LOGS;expandBottomPanel=false;graphView=0;logsSeverity=INFO;mainTab=JOB_GRAPH;step=?pageState=(%22dfTime%22:(%22s%22:%222025-07-18T14:16:22.035Z%22,%22e%22:%222025-07-18T15:44:38.618Z%22))&project=apache-beam-testing

Sorry if there's any confusion... I was referring to the other pipeline I'm still working on that is blocked by #35715

@charlespnh
Contributor Author

I don't think these failing jobs are related to this PR's changes. They just passed in the previous commit.

@chamikaramj
Contributor

Ah, sorry about the confusion. Lemme rerun the tests and we can merge when they pass.

@charlespnh
Contributor Author

charlespnh commented Jul 31, 2025

I'll rebase with the latest HEAD and trigger the tests again. Looks like there were some recent changes to the beam_PreCommit_Python_ML workflow (since reverted) that are causing this issue.

@chamikaramj
Contributor

ML tests are still failing but the error doesn't seem to be related.

ERROR: module or package not found: require_docker_in_docker (missing __init__.py?)

I'll go ahead and merge.

@chamikaramj chamikaramj merged commit 1aa3592 into apache:master Jul 31, 2025
87 of 103 checks passed