Key Responsibilities:
- Lead the design, implementation, and optimization of big data pipelines using GCP services such as BigQuery, Dataflow, Dataproc, and Pub/Sub.
- Work with Hadoop ecosystem tools (Spark, MapReduce, HDFS, Hive, HBase) to process a...