Glad to have you. I am also in North Carolina (Charlotte) and started learning
Apache Hadoop, including Spark (and Flink, currently in incubation). I've been
through a few paid programs and free online videos, including the Cloudera
examples on their website. If you need any assistance or recommendations for
online training, let me know. I will do my best to help with resources and
learning. I have paid for my CCA175 voucher and am currently studying; I hope
to be ready to take it in two weeks (20 hours of studying, I've been told).
I am currently working at PRGX as a Technology Lead. I work with the ETL tool Talend and Cloudera 5.7. I mostly do POCs to explore functionalities and determine whether they suit my organization. I am interested in Talend, Hadoop, Hive, Impala, and Spark topics.
# Mario Amatucci
I used to work as a BI developer with a focus on ETL. These days I'm working with Think Big (Teradata) on customers' data lakes: mostly NiFi, plus some Spark support and admin tasks when needed.
My name is Monika Singh Chauhan, and I work as a Firmware Developer. My company has started looking at the Big Data stack for analytics. I downloaded Cloudera and am exploring it.
Currently I am working out how to run R scripts on Cloudera.
I'm excited to learn how Cloudera works and how we can use it.
Mark Teehan at SAP, Singapore. My team conducts proofs of concept on data, both database (HANA) and big data (SAP HANA Vora, Spark, etc.). These run on a combination of customer clusters, internal clusters, and Docker clusters on laptops.
I have experience with Linux/Unix operating systems and virtualization (VMware/KVM hypervisors). I have worked on many Big Data installations, mostly on x86 hardware and operating systems from vendors such as HPE, Dell, Fujitsu, and Cisco, but I am quite new to Cloudera. I am planning to develop my Cloudera skills and, in the coming days, move into the DevOps area, automating bare-metal installations with Ansible. Please follow me on LinkedIn.
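As a rough illustration of that Ansible plan, a minimal playbook for preparing bare-metal nodes before a Cloudera install might look like the sketch below. The inventory group name and package list are placeholders, and it assumes a yum-based OS; adjust for your environment.

```yaml
# site.yml -- hypothetical sketch: prepare bare-metal nodes for a Hadoop/Cloudera install
- hosts: cluster_nodes          # placeholder inventory group
  become: true
  tasks:
    - name: Lower vm.swappiness, commonly recommended for Hadoop worker nodes
      ansible.posix.sysctl:
        name: vm.swappiness
        value: "1"
        state: present

    - name: Install prerequisite packages (illustrative list)
      ansible.builtin.yum:
        name:
          - ntp
          - python3
        state: present

    - name: Ensure time synchronization is running
      ansible.builtin.service:
        name: ntpd
        state: started
        enabled: true
```

Run against an inventory file with `ansible-playbook -i inventory site.yml`.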
I'm a software engineer at Priceline.com, currently developing a Java program to replace an existing Java program that uses the Hive Streaming API from Hortonworks to write ORC-formatted data to HDFS files.