<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Need Help Clarifying a Real CCA175 Scenario in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Need-Help-Clarifying-a-Real-CCA175-Scenario/m-p/413523#M254110</link>
    <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/138093"&gt;@MarlinGomez&lt;/a&gt;&amp;nbsp;For that CCA175 scenario (streaming data arriving in inconsistent formats that must be cleansed, transformed, and stored in HDFS), Spark Structured Streaming with schema evolution is the most exam-realistic pick.&lt;BR /&gt;It handles real-time ingestion efficiently via micro-batches, copes with evolving JSON/Avro payloads when you parse them against a declared schema, and lets you apply transformations like filter/map before writing Parquet to HDFS.&lt;BR /&gt;Separate ETL pipelines per format add too much complexity and overhead under exam constraints, and pure schema-on-read skips the proactive cleansing the scenario calls for.&lt;BR /&gt;Quick start: read from a Kafka source, cleanse and transform, then writeStream as Parquet to HDFS. Note that mergeSchema is a read-side Parquet option, so enable it when you read the evolved output back: spark.read.option("mergeSchema", "true"). This nails the "perform ETL on data using Spark API" objective. Good luck with your prep.&lt;/P&gt;</description>
    <pubDate>Sun, 08 Feb 2026 08:10:08 GMT</pubDate>
    <dc:creator>RAGHUY</dc:creator>
    <dc:date>2026-02-08T08:10:08Z</dc:date>
    <item>
      <title>Need Help Clarifying a Real CCA175 Scenario</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Need-Help-Clarifying-a-Real-CCA175-Scenario/m-p/413059#M253823</link>
      <description>&lt;P&gt;Hey everyone, I’m currently preparing for the CCA175 (Cloudera Data Engineer) exam and focusing heavily on real, hands-on scenario challenges to strengthen my understanding. So far, I’ve practiced with various ingestion and transformation pipelines, but I’m stuck on one scenario that feels very close to what the actual exam might present. Midway through my study plan, I started using &lt;STRONG&gt;Certs Matrix&lt;/STRONG&gt;, which has helped me evaluate different approaches to solving Spark and Hadoop workflow problems under pressure.&lt;BR /&gt;The scenario I’m trying to clarify is this: if you receive streaming data in inconsistent formats and need to cleanse, transform, and store it efficiently in HDFS, which approach would be most exam-accurate: Spark Structured Streaming with schema evolution, separate ETL pipelines for each input format, or a unified schema-on-read strategy?&lt;BR /&gt;I’d really appreciate insights from anyone who has taken CCA175 or handled similar real-world pipelines. Your guidance would help me refine my preparation.&lt;/P&gt;</description>
      <pubDate>Mon, 08 Dec 2025 22:28:11 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Need-Help-Clarifying-a-Real-CCA175-Scenario/m-p/413059#M253823</guid>
      <dc:creator>MarlinGomez</dc:creator>
      <dc:date>2025-12-08T22:28:11Z</dc:date>
    </item>
    <item>
      <title>Re: Need Help Clarifying a Real CCA175 Scenario</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Need-Help-Clarifying-a-Real-CCA175-Scenario/m-p/413523#M254110</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/138093"&gt;@MarlinGomez&lt;/a&gt;&amp;nbsp;For that CCA175 scenario (streaming data arriving in inconsistent formats that must be cleansed, transformed, and stored in HDFS), Spark Structured Streaming with schema evolution is the most exam-realistic pick.&lt;BR /&gt;It handles real-time ingestion efficiently via micro-batches, copes with evolving JSON/Avro payloads when you parse them against a declared schema, and lets you apply transformations like filter/map before writing Parquet to HDFS.&lt;BR /&gt;Separate ETL pipelines per format add too much complexity and overhead under exam constraints, and pure schema-on-read skips the proactive cleansing the scenario calls for.&lt;BR /&gt;Quick start: read from a Kafka source, cleanse and transform, then writeStream as Parquet to HDFS. Note that mergeSchema is a read-side Parquet option, so enable it when you read the evolved output back: spark.read.option("mergeSchema", "true"). This nails the "perform ETL on data using Spark API" objective. Good luck with your prep.&lt;BR /&gt;
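P.S. A minimal sketch of that pipeline, assuming a hypothetical broker, topic, JSON schema, and HDFS paths (swap in your own values):&lt;BR /&gt;
&lt;PRE&gt;import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types.{LongType, StringType, StructType, TimestampType}

object StreamEtlToHdfs {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("cca175-stream-etl").getOrCreate()

    // Kafka delivers raw bytes, so the streaming JSON is parsed against a
    // declared schema rather than inferred per micro-batch.
    val schema = new StructType()
      .add("id", LongType)
      .add("event_type", StringType)
      .add("ts", TimestampType)

    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // placeholder broker
      .option("subscribe", "events")                    // placeholder topic
      .load()

    // Cleanse: parse the JSON payload and drop records that fail to parse.
    val cleansed = raw
      .select(from_json(col("value").cast("string"), schema).as("j"))
      .select("j.*")
      .filter(col("id").isNotNull)

    // Sink: Parquet on HDFS; the streaming file sink requires a checkpoint.
    cleansed.writeStream
      .format("parquet")
      .option("path", "hdfs:///user/exam/output")        // placeholder path
      .option("checkpointLocation", "hdfs:///user/exam/chk")
      .start()
      .awaitTermination()
  }
}&lt;/PRE&gt;
Reading the evolved output back with merged schemas: spark.read.option("mergeSchema", "true").parquet("hdfs:///user/exam/output").&lt;/P&gt;</description>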
      <pubDate>Sun, 08 Feb 2026 08:10:08 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Need-Help-Clarifying-a-Real-CCA175-Scenario/m-p/413523#M254110</guid>
      <dc:creator>RAGHUY</dc:creator>
      <dc:date>2026-02-08T08:10:08Z</dc:date>
    </item>
  </channel>
</rss>