My name is Brant. I'm a researcher at Johns Hopkins. My background is in biomedical research with an emphasis on machine learning and natural language processing. I'm coming over from mostly large shared-memory and MPI machines/clusters. I do have lots of Java programming experience, but most of our applications aren't Java based. I've worked on Hadoop/Accumulo-based applications before but am new to the DevOps/configuration/deployment aspects.
I'm currently trying to get my Dockerized applications deployed in Hadoop Streaming on one of our clusters, and eventually I'm hoping to get longer-running Dockerized GPU applications running in Slider. We're using Docker because we can encapsulate our C++/Python/etc. dependencies with our programs.
I am Mohan, based in Singapore and working with Singtel Pvt. Limited (a telecom company); I recently joined this company as a Big Data Architect. Our company is using Cloudera 5.4 Enterprise Edition.
I have 10+ years of experience, including more than 2 years with the Cloudera distribution, working on sizing, storage formats, and data analytics.
Hello! I'm very new to the Hadoop / Cloudera environment and hoping to learn much from this community. My career arc has been software developer, software architect, and enterprise architect working entirely in the agribusiness industry. I just started a new job and have been given the task of identifying big data use cases to drive value from our Cloudera environment. So, here we go!
Hi Cloudera community!
Happy to join your community!
I'm a sysadmin who loves my job and likes working on new technology. So, I'm on Cloudera now!
For some tests, we created a cluster with 3 nodes in a lab: 1 node for Cloudera Manager, 1 node acting as both NameNode and DataNode, and the last one as DataNode only.
It's a lab to discover the new version, Cloudera 5.5. So it's just for running some tests on it, not for production!
We installed these services: HDFS, Hive, Hue, Impala, Oozie, ZooKeeper, MapReduce2 (YARN), and Sqoop1.
One of our developers tried to import some data into Hive, but we got an error.
Here is the command line used by our developer:
sqoop import --connect jdbc:mysql://our.database.url/database --username user --password passwordtest --table table_product --target-dir /path/to/db --split-by product_id --hive-import --hive-overwrite --hive-table table_product
The command starts successfully and we see the mappers run the job to 100%, but when the job finishes, we get an error:
16/02/12 15:37:57 WARN hive.TableDefWriter: Column last_updated had to be cast to a less precise type in Hive
16/02/12 15:37:57 INFO hive.HiveImport: Loading uploaded data into Hive
16/02/12 15:37:57 ERROR hive.HiveConfig: Could not load org.apache.hadoop.hive.conf.HiveConf. Make sure HIVE_CONF_DIR is set correctly.
16/02/12 15:37:57 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: java.lang.ClassNotFoundException: org.apache.hadoop.hive.conf.HiveConf
I did some searching in the configuration files for HIVE_CONF_DIR and didn't find anything weird.
I can't find a solution and I'm blocked on it... so our developer can't continue his tests.
I searched in the Cloudera Manager configuration too.
Do you have any idea about this? I did some searching on the web with no success.
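For anyone reproducing this, the checks above can be sketched roughly as follows. This is a hedged sketch, not a confirmed fix: the `/etc/hive/conf` and parcel paths are assumptions for a typical CDH parcel install and may differ on your cluster. The `ClassNotFoundException: org.apache.hadoop.hive.conf.HiveConf` usually means the Sqoop client can't see the Hive configuration and libraries on its classpath, so one common workaround is to export them before re-running the import:

```shell
# Diagnostic sketch -- paths below are assumptions for a CDH parcel install,
# not taken from the poster's cluster.

# Is a Hive client configuration visible to the user running sqoop?
echo "HIVE_CONF_DIR=${HIVE_CONF_DIR:-<unset>}"

# A common workaround for the HiveConf ClassNotFoundException: point the
# client at the Hive config and Hive jars before re-running the import.
export HIVE_CONF_DIR=/etc/hive/conf
export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:/opt/cloudera/parcels/CDH/lib/hive/lib/*"
echo "HADOOP_CLASSPATH=$HADOOP_CLASSPATH"
```

If exporting these makes the import succeed, the underlying issue is likely that the node running sqoop has no Hive Gateway role (and thus no deployed Hive client configuration), which can be added in Cloudera Manager.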
Thanks a lot for your help!
I am the latest entrant to this community. I come from the mainframe world and have been working in the banking world for the last 12 years. I am currently working for HSBC on a Core Banking project and am learning Hadoop out of passion.
With all the intellectuals like you guys around, I am sure it will be an easy ride for me whenever I am stuck.
Let me introduce myself. I have 14 years of experience: 11 years in IT and the initial 3 years teaching at a university and technical institutes. My IT experience is mostly in the Java/J2EE technology stack. Over the last 2 years I have taken an interest in Hadoop and its ecosystem. I had been going through different topics on and off to learn this new technology, but in the last 6 to 9 months I have developed a strong passion for the Hadoop ecosystem. The more I learn, the more passionate I become. Now the Hadoop fever has reached the point where I am planning to quit my regular job and become a serious Hadoop developer/consultant.
I am leaving my current company in 2 weeks. After that I will go for the CCA 175 Cloudera certification. I hope that helps in kick-starting my full-time career in Data Science.