Member since: 07-19-2018
Posts: 613
Kudos Received: 100
Solutions: 117

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 3688 | 01-11-2021 05:54 AM |
 | 2577 | 01-11-2021 05:52 AM |
 | 6964 | 01-08-2021 05:23 AM |
 | 6407 | 01-04-2021 04:08 AM |
 | 29440 | 12-18-2020 05:42 AM |
08-16-2018 01:54 PM
You need to run these commands as a user that has the required access/permissions to Hadoop. Make sure you run check-env.sh to resolve any access and environment issues; then the sample command should work just fine.
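A minimal sketch of that sequence; the location of check-env.sh is an assumption, so adjust the path to wherever your install puts it:

```bash
# Become a user with HDFS permissions (hdfs is the usual superuser)
# and run the environment check mentioned above.
sudo su - hdfs
./check-env.sh

# Then retry the sample command, for example a simple listing:
hdfs dfs -ls /
```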
08-15-2018 02:39 PM
@seninus glad you got it working. Please click Accept on the main answer; it helps close the question and gives me some reputation points. ;O)
08-14-2018 06:35 PM
Now you just need to do the copy from HDFS to the local filesystem: sudo su - hdfs -c "hdfs dfs -copyToLocal /datasets /tmp" followed by mv /tmp/datasets /home/maria_dev. The last command is needed because the hdfs user can't write to /home/maria_dev, so write from HDFS to /tmp, then move from /tmp to /home/maria_dev.
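Put together, the two-step copy looks like this:

```bash
# Copy out of HDFS as the hdfs user; /tmp is world-writable.
sudo su - hdfs -c "hdfs dfs -copyToLocal /datasets /tmp"

# The hdfs user cannot write into /home/maria_dev directly,
# so move the result there as a second step.
mv /tmp/datasets /home/maria_dev
```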
08-14-2018 05:39 PM
1 Kudo
Additionally, that sandbox probably has a local (to the Mac) folder path mounted on the sandbox file system. You would need to use that path to get files from your Mac to the sandbox, then from the sandbox to HDFS.
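If no shared mount is set up, copying over SSH works as well. A sketch, where the port, user, and file names are assumptions to verify against your sandbox's documentation:

```bash
# From the Mac: copy a file onto the sandbox over SSH.
# Port 2222 and root@localhost are common HDP sandbox defaults,
# but check them for your setup.
scp -P 2222 ~/Downloads/file.txt root@localhost:/tmp/

# Then, from a shell on the sandbox: push the file into HDFS.
sudo su - hdfs -c "hdfs dfs -copyFromLocal /tmp/file.txt /datasets/"
```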
08-14-2018 05:38 PM
@seninus those hdfs commands are meant to be executed at a terminal prompt on the sandbox node, not on your Mac... "local" in copyFromLocal means the sandbox node's filesystem, NOT your local Mac.
08-14-2018 04:56 PM
@seninus The copy-to and copy-from HDFS syntax is as follows:
hdfs dfs -copyFromLocal /local/folder/file.txt /hdfs/folder/
hdfs dfs -copyToLocal /hdfs/folder/file2.txt /local/folder/
where /local/ is the local file system and /hdfs/ is the HDFS file system. It is also important to note that you want to execute those commands as the hdfs user, so either:
sudo su - hdfs
or
sudo su - hdfs -c "hdfs dfs -copyFromLocal /local/folder/file.txt /hdfs/folder/"
If this answer is helpful please choose ACCEPT.
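A quick round trip to verify the syntax; the file names are placeholders, and this assumes a /tmp directory exists in HDFS:

```bash
# Create a small test file, push it into HDFS, then pull it back.
echo "hello" > /tmp/file.txt
sudo su - hdfs -c "hdfs dfs -copyFromLocal /tmp/file.txt /tmp/"
sudo su - hdfs -c "hdfs dfs -copyToLocal /tmp/file.txt /tmp/file2.txt"
ls -l /tmp/file2.txt
```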
08-14-2018 11:47 AM
@subhash parise click on Comment to reply in-line, rather than posting a new answer... Some of your settings, which are above 100 GB, are concerning. I would recommend starting with much smaller settings: 2 GB, 4 GB, 8 GB, 16 GB, 32 GB, etc. Find a working combination, then experiment with increasing attributes one at a time, especially the over-100 GB settings. My settings are:
hive.tez.container.size: 13472 MB
Number of nodes used by Hive's LLAP: 2
Memory per daemon: 43520 MB
In-memory cache per daemon: 2560 MB
Number of executors per LLAP daemon: 10
Hive Tez container size: 4096 MB
Number of containers held: 1
Hive Server Interactive heap size: 2048 MB
LLAP daemon container max headroom: 12288 MB
LLAP daemon heap size: 32768 MB
Slider AM container size: 2560 MB
With 100s of GB of RAM available in your cluster, you should be able to get to 10 nodes used by LLAP (10 with smaller settings, versus 1 with huge settings), but start at 2, get them working, and build up from there.
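Before sizing up, it can help to confirm what each node actually has available; a minimal sketch to run on a cluster node (replace <node-id> with an id from the list output):

```bash
# Physical memory on this node, in GB.
free -g

# YARN's view: list the NodeManagers, then inspect one to see the
# memory and vcores it has registered.
yarn node -list -all
yarn node -status <node-id>
```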
08-13-2018 01:21 PM
@subhash parise There are quite a few errors above about missing settings. I would confirm that you have completed the Hive LLAP setup and then try restarting everything again. Some slight modifications may be necessary to fit your specific cluster, but the following links may be helpful: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_command-line-installation/content/install_hive_llap.html https://community.hortonworks.com/articles/114636/how-to-enable-hive-llap-for-interactive-sql-on-had.html Hive LLAP can be very tricky, so be sure to follow all of the steps and test restarting LLAP a few times. If this answer is helpful please choose ACCEPT.
08-13-2018 01:08 PM
1 Kudo
@Hariprasanth Madhavan We need more information to provide a more specific answer to your question. That said, I would recommend that you first ensure the TCP port is reachable from the server running NiFi. I would do this by opening a command line on the NiFi node and verifying that it can access the port and can see new TCP data arriving in your terminal. Next, confirm that the Local Network Interface and Port are properly configured in the ListenTCP processor. Then run the processor and tail the NiFi log for any additional warnings or errors. If this answer is helpful please choose ACCEPT.
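One way to run those checks from the NiFi node, as a sketch; host and port are placeholders, and nc (netcat) must be installed:

```bash
# Verify the port is reachable from the NiFi node.
nc -vz <source-host> <port>

# Listen on the port ListenTCP will use and watch for test data.
# Stop this listener before starting the processor, since only one
# process can bind the port at a time.
nc -lk <port>

# From another shell, send a test line at the listener.
echo "test message" | nc <nifi-host> <port>
```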
08-13-2018 01:01 PM
@Parth Karkhanis Your PutHDFS flow file should have an attribute called ${hive.ddl}. You should be able to use this DDL statement to create the Hive table; work with it manually until you get the syntax correct. In my working example, I send PutHDFS to ReplaceText, where I append to ${hive.ddl} so it becomes: ${hive.ddl} LOCATION '/user/nifi/${folderName}/${tableName}/' tblproperties ("orc.compress" = "SNAPPY") Then I send that to the PutHiveQL processor. If this answer helps, please choose ACCEPT.
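For illustration, if ${hive.ddl} resolves to a generated CREATE TABLE statement (the table and column names below are hypothetical), the text handed to PutHiveQL would end up looking something like:

```sql
CREATE EXTERNAL TABLE IF NOT EXISTS my_table (id INT, name STRING)
STORED AS ORC
LOCATION '/user/nifi/myFolder/my_table/'
TBLPROPERTIES ("orc.compress" = "SNAPPY")
```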