This is the thread for us all to get to know each other a little better. Please reply to this topic with a quick introduction of yourself. Include details like your background, what industry and job role you are in, what brings you to this community, what projects you are working on or planning, and any other interesting facts about you.
We encourage all our members to participate in this thread. It can be a great place to make connections with other professionals, get ideas for new projects, or just enjoy a little downtime conversing with other big data practitioners.
I have one more query. Since I come from a data warehousing background with very little knowledge of Java, I am finding it difficult to write MapReduce jobs in Java.
Does this exam test Java programming for MR jobs?
For Spark, can you please suggest an online MOOC or book to get started?
Cloudera Developer training is out of my budget.
It is always recommended to have a good knowledge of Java, because it makes low-level MR programming much easier.
That said, you can code MR jobs in other languages such as Python, although this adds an extra interpreter layer, which is not great for performance.
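For example, with Hadoop Streaming any language that reads lines from stdin and writes tab-separated key/value pairs to stdout can serve as a mapper or reducer. Here is a minimal Python word-count sketch (an illustrative example, not tied to the exam):

```python
# Minimal Hadoop Streaming word-count sketch in Python. The mapper emits
# (word, 1) pairs; the reducer sums counts per word, relying on the fact
# that Streaming hands the reducer its keys already sorted.

def mapper(lines):
    """Emit 'word<TAB>1' for every word on every input line."""
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(lines):
    """Sum counts for each word; input keys must arrive sorted."""
    current, total = None, 0
    for line in lines:
        word, count = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                yield f"{current}\t{total}"
            current, total = word, 0
        total += int(count)
    if current is not None:
        yield f"{current}\t{total}"
```

To run it on a cluster you would wire these functions to stdin/stdout in two small scripts and pass them to the `hadoop-streaming` jar with `-mapper` and `-reducer`; the Java-versus-Python trade-off is exactly the interpreter overhead mentioned above.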
On the CCA175 certification front, your Java programming expertise is not evaluated. You are given a choice of tool sets with which to answer the certification questions.
Hope this helps.
I have been working as a Hadoop administrator for the past 2.5 years; my original background is as a Teradata administrator. One of our important clients requested a big data solution for processing more than 100 million CSV records, and that is how I entered the big data world (a REALLY big world). I am still learning like a student 🙂
Once you enter the big data world you can't skip Hadoop, so for the past 2.5 years I have been working with Hadoop tools (Hue, Hive, HBase, Pig, MapReduce, Sqoop, Kafka, Storm, etc.).
Initially I really struggled to survive in the Hadoop environment (new tools, including Java), but now I am really happy to be here in the community. I am learning, and looking forward to a big future too.
I am very passionate about playing cricket.
To introduce myself:
I have over 18 years of experience in information architecture; data and data warehouse architecture (OLTP and OLAP); data modeling and governance; business intelligence design; database design, development, and performance tuning; operations and infrastructure management; technical project management; and executing POCs for performance benchmarking in multi-tier, multi-terabyte data warehouses.
I also have experience establishing direction and data governance for major change programs in their use of data.
I am really excited to be part of the Cloudera community, and it is a privilege to interact with so many talented professionals here. I am quite new to this group and can't wait to take a deep dive into the Big Data and Hadoop world; I am sure we'll have a lot of fun.
I love travelling, walking, and playing cricket, and I have a passion for community service; I have been associated with a non-profit organization for more than 7 years.
You can find me on LinkedIn: http://www.linkedin.com/pub/deepak-lal/9/696/720
My name is Brant. I'm a researcher at Johns Hopkins. My background is in biomedical research with an emphasis on machine learning and natural language processing. I'm coming over from mostly large shared memory and MPI machines/clusters. I do have lots of Java programming experience but most of our applications aren't Java based. I've worked on Hadoop/Accumulo based applications before but am new to the dev ops/configuration/deployment aspects.
I'm currently trying to get my Dockerized applications deployed via Hadoop Streaming on one of our clusters, and eventually hoping to get longer-running Dockerized GPU applications going in Slider. We're using Docker because it lets us encapsulate our C++/Python/etc. dependencies with our programs.
I am Mohan, based in Singapore. I recently joined Singtel Pvt Ltd (a telecom company) as a big data architect. Our company uses the Cloudera 5.4 Enterprise edition.
I have 10+ years of experience overall, including more than 2 years working with the Cloudera distribution on sizing, storage formats, and data analytics.
Hello! I'm very new to the Hadoop / Cloudera environment and hoping to learn much from this community. My career arc has been software developer, software architect, and enterprise architect working entirely in the agribusiness industry. I just started a new job and have been given the task of identifying big data use cases to drive value from our Cloudera environment. So, here we go!
Hi Cloudera community!
Happy to join your community!
I'm a sysadmin who loves his job and likes to work on new technology. So, I'm on Cloudera now!
For some tests, we created a cluster with 3 nodes in a lab: one node for Cloudera Manager, one node as both NameNode and DataNode, and the last one as a DataNode only.
It's a lab to explore the new version, Cloudera 5.5, so it's just for testing, not for production!
We installed these services: HDFS, Hive, Hue, Impala, Oozie, ZooKeeper, MapReduce2 (YARN), and Sqoop1.
One of our developers tried to import some data into Hive, but we got an error.
Here is the command line used by our developer:
sqoop import --connect jdbc:mysql://our.database.url/database --username user --password passwordtest --table table_product --target-dir /path/to/db --split-by product_id --hive-import --hive-overwrite --hive-table table_product
The command starts successfully and we see the mappers complete to 100%, but when the job finishes, we get this error:
16/02/12 15:37:57 WARN hive.TableDefWriter: Column last_updated had to be cast to a less precise type in Hive
16/02/12 15:37:57 INFO hive.HiveImport: Loading uploaded data into Hive
16/02/12 15:37:57 ERROR hive.HiveConfig: Could not load org.apache.hadoop.hive.conf.HiveConf. Make sure HIVE_CONF_DIR is set correctly.
16/02/12 15:37:57 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: java.lang.ClassNotFoundException: org.apache.hadoop.hive.conf.HiveConf
I searched the configuration files for HIVE_CONF_DIR and didn't find anything weird. I also searched the Cloudera Manager configuration.
I can't find a solution and am blocked on this, so our developer can't continue his tests.
Do you have any idea about this? I have searched the web with no success.
Thanks a lot for your help!
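For context, this ClassNotFoundException on org.apache.hadoop.hive.conf.HiveConf usually indicates that Sqoop cannot see the Hive libraries or configuration on its classpath. One thing I plan to check before re-running the import is the environment along these lines (both paths below are assumptions for a default CDH parcel layout, not confirmed on our cluster):

```shell
# Make Hive's client configuration and jars visible to Sqoop before
# re-running the import. Both paths are assumptions for a default CDH
# parcel install; adjust them to the actual layout on the gateway host.
export HIVE_CONF_DIR=/etc/hive/conf
export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:/opt/cloudera/parcels/CDH/lib/hive/lib/*"
```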
I am the latest entrant to this community. I come from the mainframe world, where I have been working for the last 12 years. I am currently working for HSBC on a core banking project and am learning Hadoop out of passion.
With all the intellectuals like you around, I am sure it will be an easy ride for me whenever I am stuck.
Let me introduce myself. I have 14 years of experience: 11 years in IT and the initial 3 years teaching at universities and technical institutions. My IT experience is mostly in the Java/J2EE technology stack. Over the last 2 years I have developed an interest in Hadoop and its ecosystem, going through different topics on and off to learn this new technology. In the last 6 to 9 months I have built a strong passion for the Hadoop ecosystem; the more I learn, the more passionate I become. The Hadoop fever has now reached the point where I am planning to quit my regular job and become a serious Hadoop developer/consultant.
I am leaving my current company in 2 weeks. After that I will go for the CCA 175 Cloudera certification, which I hope will kick-start my full-time career in data science.
My name is Hugh Jamieson and I am the Hadoop Principal Engineer at OCLC.org, a non-profit org that serves libraries around the world. We are heavily invested in big data to support the data processes our community needs, processes that previously took months to complete! We manage library information at all levels and support sharing of resources at cloud scale. We have a considerable investment in HBase and are true believers.
Like many organizations, we are struggling with the appetite our org has for big data and the velocity at which new projects arrive. We have many clusters arranged around geographical and organizational boundaries, and managing these little beauties is a real chore. We are looking for ways to improve our velocity getting new features and tools into production. We have some roll-your-own utilities for cluster management that simply do not scale. So, we are looking at CM in the hope that it will help us speed up our deployments and make them much less complex.
Like other big data fans, I have no life and seldom see the sun. JK; I love my job. I stumbled upon Hadoop 5 years ago and was completely hooked on its design and capabilities. I have travelled through many environments, from mainframe to SGI, and I can say I am a total Hadoop fan-boy. Such a nerd.
In my spare time I like to evangelize Scala and Spark, Streaming, performance, reactive programming, and immutability. My favorite color is blue. I have my own cluster in my basement. Yeah, I'm that sad.
This is Arun, and I have around 12 years of experience in IT primarily as a developer and a bit of DevOps as well.
I have worked on Cloudera, Hortonworks, MapR platforms as a DevOps engineer.
I am happy to join this community, both to leverage your knowledge in working around the technical issues I come across and to answer technical questions based on my own knowledge and experience with Hadoop platforms.
I am a BI consultant who has worked extensively in BI reporting, analytics, and ETL with SAP/Oracle and Tableau. I have hands-on experience with Cassandra and Hadoop (RDBMS - Sqoop - Hadoop) and would like to get certified in Hadoop soon, primarily aiming for the CCP Data Engineer certification.
In my spare time I like to read books (biographies, philosophies), and I am also an avid photographer.
I am a member of many forums, such as SAP SCN, Oracle, Stack Overflow, Business Objects, and Cassandra.
As with those forums, I hope I can learn and contribute freely here.