Google Cloud Dataflow provides a simple, powerful programming model for building both batch and streaming parallel data processing pipelines.
The Cloud Dataflow SDK is used to access the Google Cloud Dataflow service, which is currently in Alpha and restricted to whitelisted users.
The SDK is publicly available and can be used for local execution by anyone. Note, however, that the SDK is also an Alpha release and may change significantly over time. The SDK is built to be extensible and support additional execution environments ("runners") beyond local execution and the Google Cloud Dataflow service. As the product matures, we look forward to working with you to improve Cloud Dataflow.
The key concepts in this programming model are:
- `PCollection`: represents a collection of data, which could be bounded or unbounded in size.
- `PTransform`: represents a computation that transforms input `PCollection`s into output `PCollection`s.
- `Pipeline`: manages a directed acyclic graph of `PTransform`s and `PCollection`s, which is ready for execution.
- `PipelineRunner`: specifies where and how the pipeline should execute.
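Put together, a minimal pipeline built from these concepts looks roughly like the sketch below. This is illustrative only: the file paths are placeholders, and since the SDK is an Alpha release, class and method names such as `TextIO` and `ParDo` may change over time.

```java
import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.io.TextIO;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.transforms.DoFn;
import com.google.cloud.dataflow.sdk.transforms.ParDo;
import com.google.cloud.dataflow.sdk.values.PCollection;

public class MinimalPipeline {
  public static void main(String[] args) {
    // The options carry the PipelineRunner choice; with no --runner flag,
    // the DirectPipelineRunner is used for local execution.
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Each apply() adds a PTransform to the pipeline's directed acyclic
    // graph; every intermediate result is a PCollection.
    PCollection<String> lines = p.apply(TextIO.Read.from("/tmp/input.txt"));
    PCollection<String> upper = lines.apply(
        ParDo.of(new DoFn<String, String>() {
          @Override
          public void processElement(ProcessContext c) {
            c.output(c.element().toUpperCase());
          }
        }));
    upper.apply(TextIO.Write.to("/tmp/output"));

    // Nothing executes until the graph is handed to the runner.
    p.run();
  }
}
```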
We provide three PipelineRunners:
- The `DirectPipelineRunner` runs the pipeline on your local machine.
- The `DataflowPipelineRunner` submits the pipeline to the Dataflow Service, where it runs using managed resources in the Google Cloud Platform (GCP).
- The `BlockingDataflowPipelineRunner` submits the pipeline to the Dataflow Service via the `DataflowPipelineRunner` and then prints messages about the job status until execution is complete.
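The runner is typically chosen with the `--runner` command-line option, but it can also be set in code. The sketch below assumes the SDK's `DataflowPipelineOptions` interface; the project and bucket names are placeholders.

```java
import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.options.DataflowPipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner;

public class RunnerSelection {
  public static void main(String[] args) {
    DataflowPipelineOptions options =
        PipelineOptionsFactory.create().as(DataflowPipelineOptions.class);
    options.setRunner(BlockingDataflowPipelineRunner.class); // block until the job finishes
    options.setProject("my-gcp-project");                    // placeholder project name
    options.setStagingLocation("gs://my-bucket/staging");    // placeholder GCS path
    Pipeline p = Pipeline.create(options);
    // ... apply transforms, then:
    p.run();
  }
}
```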
The Dataflow Service is currently in the Alpha phase of development and access is limited to whitelisted users.
Additionally, in partnership with Cloudera, you can run Dataflow pipelines on an Apache Spark backend. The relevant runner code is hosted in this repository.
This repository consists of two modules:
- The `SDK` module provides a set of basic Java APIs to program against.
- The `Examples` module provides a few samples to get started. We recommend starting with the `WordCount` example.
The following command will build both modules and install them in your local Maven repository:

    mvn clean install
You can speed up the build and install process by using the following options:
- To skip execution of the unit tests, run:

      mvn install -DskipTests

- While iterating on a specific module, use the following command to compile and reinstall it. For example, to reinstall the `examples` module, run:

      mvn install -pl examples

  Be careful, however: this command will use the most recently installed SDK from the local repository (or Maven Central), even if you have changed it locally.

- To run Maven using multiple threads, run:

      mvn -T 4 install
After building and installing, you can execute the `WordCount` and other example pipelines using the `DirectPipelineRunner` on your local machine:
    mvn exec:java -pl examples \
    -Dexec.mainClass=com.google.cloud.dataflow.examples.WordCount \
    -Dexec.args="--input=<INPUT FILE PATTERN> --output=<OUTPUT FILE>"
If you have been whitelisted for Alpha access to the Dataflow Service and have followed the developer setup steps, you can use the `BlockingDataflowPipelineRunner` to execute the `WordCount` example on the Google Cloud Platform (GCP). In this case, you specify your project name, the pipeline runner, and a staging location in Google Cloud Storage (GCS), as follows:
    mvn exec:java -pl examples \
    -Dexec.mainClass=com.google.cloud.dataflow.examples.WordCount \
    -Dexec.args="--project=<YOUR GCP PROJECT NAME> --stagingLocation=<YOUR GCS LOCATION> --runner=BlockingDataflowPipelineRunner"
The GCS location should be entered in the form `gs://bucket/path/to/staging/directory`. The GCP project refers to its name (not its number), which must have been whitelisted for Cloud Dataflow. Refer to Google Cloud Platform for general instructions on getting started with GCP.
Alternatively, you may choose to bundle all dependencies into a single JAR and execute it outside of the Maven environment. For example, after building and installing as usual, you can execute the following commands to create the bundled JAR of the `Examples` module and execute it both locally and on GCP:
    mvn bundle:bundle -pl examples

    java -cp examples/target/google-cloud-dataflow-java-examples-all-bundled-manual_build.jar \
    com.google.cloud.dataflow.examples.WordCount \
    --input=<INPUT FILE PATTERN> --output=<OUTPUT FILE>

    java -cp examples/target/google-cloud-dataflow-java-examples-all-bundled-manual_build.jar \
    com.google.cloud.dataflow.examples.WordCount \
    --project=<YOUR GCP PROJECT NAME> --stagingLocation=<YOUR GCS LOCATION> --runner=BlockingDataflowPipelineRunner
Other examples can be run similarly by replacing the `WordCount` class name with `BigQueryTornadoes`, `DatastoreWordCount`, `TfIdf`, `TopWikipediaSessions`, etc., and adjusting the runtime options under the `-Dexec.args` parameter, as specified in the example itself.
Note that when running Maven on the Microsoft Windows platform, backslashes (`\`) under the `-Dexec.args` parameter should be escaped with another backslash. For example, an input file pattern of `c:\*.txt` should be entered as `c:\\*.txt`.
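For example, a local run over text files in a hypothetical `c:\data` directory would be entered as:

```shell
mvn exec:java -pl examples -Dexec.mainClass=com.google.cloud.dataflow.examples.WordCount -Dexec.args="--input=c:\\data\\*.txt --output=c:\\data\\counts.txt"
```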
We welcome all usage-related questions on Stack Overflow tagged with `google-cloud-dataflow`.
Please use the issue tracker on GitHub to report any bugs, comments, or questions regarding SDK development.