Member since: 05-16-2016
Posts: 785
Kudos Received: 114
Solutions: 39

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2330 | 06-12-2019 09:27 AM |
| | 3593 | 05-27-2019 08:29 AM |
| | 5731 | 05-27-2018 08:49 AM |
| | 5254 | 05-05-2018 10:47 PM |
| | 3117 | 05-05-2018 07:32 AM |
07-18-2017
08:11 AM
@darkdante I don't have Ubuntu on my test machine, but I can tell you the steps are the same on Ubuntu. In /etc/hosts add: 192.168.200.11 Master. On the Master node, the /etc/hostname file should contain just Master.
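For clarity, this is what the two files would look like under the setup above (the 192.168.200.11 address comes from this thread; substitute your own network's address):

```
# /etc/hosts  (on every node, Ubuntu and RHEL alike)
192.168.200.11   Master

# /etc/hostname  (on the Master node only)
Master
```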
07-18-2017
07:51 AM
You need to change the /etc/sysconfig/network file on each node accordingly. For example, on node 1:
NETWORKING=yes
HOSTNAME=node1
NETWORKING_IPV6=no
Restart the network and that should fix the error. Let me know if that helps.
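The same file on each of the other nodes differs only in the HOSTNAME line (the "node2" name below is illustrative, not from the thread):

```
# /etc/sysconfig/network on node 2
NETWORKING=yes
HOSTNAME=node2
NETWORKING_IPV6=no

# then restart networking, e.g. on RHEL/CentOS 6-style init:
#   service network restart
```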
07-18-2017
07:42 AM
What's the status of cloudera-scm-server, cloudera-scm-server-db, and cloudera-scm-agent? I assume you are using Path A (the installer.bin script), right? Did you try logging in to the Cloudera Manager web UI at http://localhost:7180 or http://your_hostname:7180? What kernel version are you running?
07-18-2017
07:33 AM
1 Kudo
Why is your cluster "red"? I suspect disk space, but I am only guessing; run the host health check as well. Also, what values did you set for yarn.nodemanager.resource.memory-mb, yarn.scheduler.minimum-allocation-mb, mapreduce.map.memory.mb, and mapreduce.reduce.memory.mb?
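For reference, a small-node starting point might look like the fragment below. The numbers are purely illustrative (not from this thread and not recommendations); tune them to the RAM actually available on your worker nodes.

```
# Illustrative YARN / MapReduce memory settings (example values only)
yarn.nodemanager.resource.memory-mb   = 8192   # RAM YARN may use on the node
yarn.scheduler.minimum-allocation-mb  = 1024   # smallest container YARN grants
mapreduce.map.memory.mb               = 2048   # container size for map tasks
mapreduce.reduce.memory.mb            = 4096   # container size for reduce tasks
```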
07-17-2017
10:45 PM
Go inside the Impala shell: execute impala-shell in the terminal, and once you are at the prompt, fire the command inside the shell:
[localhost:21000] > set mem_limit=6g;
Let me know if that works for you.
07-17-2017
09:54 PM
The log says to track the job at http://pc1.localdomain.com:8088/proxy/application_1500089331244_0005/ . What do you see there? In the meantime, check the ResourceManager log and let me know. Were you able to perform Step 4? Also, why set it as root? You said: "In YARN settings, I have set root, and default min and max cores to be 1 and 4, and min/max memory to be 1 and 4 GB."
07-17-2017
06:49 PM
1 Kudo
This is bad. Remove that folder from HDFS; creating /user/root and handing it to the root user, as below, is what you should avoid:
sudo -u hdfs hadoop fs -mkdir /user/root
sudo -u hdfs hadoop fs -chown root /user/root
Could you follow the steps below instead?

Step 1: Create a normal user. Log in as root in a terminal.
1. sudo useradd hduser
2. Change the password for hduser using passwd.
Check whether the hduser user exists by running id:
id hduser   (you should get a result like uid=493(hdfs) gid=489(hdfs) groups=489(hdfs),492(hadoop))
id mapred
If so, add the user to the mapred and hdfs groups:
usermod -a -G mapred hduser
usermod -a -G hdfs hduser

Step 2: Log in at your OS terminal as hduser using su - hduser.

Step 3: Create the user's HDFS home directory:
sudo -u hdfs hadoop fs -mkdir /user/hduser
sudo -u hdfs hadoop fs -chown -R hduser /user/hduser
sudo -u hdfs hadoop fs -chmod -R 777 /user/hduser
Note: 777 permissions are bad practice, but since this is a test, let us use them.

Step 4:
sqoop list-databases \
  --connect jdbc:mysql://localhost \
  --username name --password Yourpassword

Step 5: Perform the same for the import. Let me know if that suffices.
07-16-2017
06:36 PM
Based on the error, I assume you are firing your sqoop command as the root user:
ERROR orm.CompilationManager: Could not make directory: /root/
Try firing the same sqoop command as a non-root user, and make sure you give that user all the necessary permissions to read and write files in HDFS. Something like:
sudo addgroup hadoop
sudo adduser --ingroup hadoop hduser
sudo usermod -a -G hdfs hduser
07-15-2017
09:38 PM
@Msdhan You're welcome! :))
07-15-2017
08:03 PM
1 Kudo
Impala does not have the concept of a PK. However, you have two options. If you want to delete a single row, you can't do that on plain Hive/Impala tables, so you can implement it with the Impala-on-Kudu format: with Kudu you can create a table with a primary key, and you can perform single-row deletes. The harder way to achieve this is:

Step 1: Create the table.
CREATE TABLE Sample
(
  name STRING,
  street STRING,
  RD123 TIMESTAMP   -- assume this is unique, since we don't have a PK
);

Step 2: Perform the LOAD DATA into Sample.

Step 3: Create another table, keeping only the latest RD123 per (name, street):
CREATE TABLE sample_no_dupli AS
SELECT name, street, MAX(RD123) AS createdate
FROM Sample
GROUP BY name, street;
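Not from the thread, just an illustration: the Step 3 GROUP BY / MAX dedup idea can be sanity-checked on plain text with standard shell tools. The file paths and sample rows below are invented for the demo.

```shell
# Made-up rows: two versions of (alice, main st), one of (bob, elm st).
printf 'alice,main st,2017-01-01\nalice,main st,2017-06-01\nbob,elm st,2017-03-01\n' \
  > /tmp/sample.csv

# Sort by name+street ascending, date descending, then keep the first row
# per (name, street) pair. This mirrors
# SELECT name, street, MAX(RD123) ... GROUP BY name, street.
sort -t, -k1,2 -k3,3r /tmp/sample.csv \
  | awk -F, '!seen[$1","$2]++' > /tmp/sample_no_dupli.csv

cat /tmp/sample_no_dupli.csv
```

Each key survives exactly once, carrying its newest date, which is the same set of rows the CTAS in Step 3 would produce.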