Member since: 11-24-2015
Posts: 223
Kudos Received: 10
Solutions: 0
09-13-2018
03:00 PM
There are multiple ways to populate Avro tables. 1) INSERT INTO avro_table VALUES (<col1>, <col2>, ..., <colN>) -- this way Hive writes the Avro files for you. 2) Generate the Avro files yourself and copy them directly into the table location (e.g. '/tmp/staging'); the Avro documentation explains how to write Avro files directly to an HDFS path. The Avro reader/writer APIs take care of storing and retrieving the records, so there is no need to specify delimiters for Avro files.
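As a rough sketch of both options (the column names and the Avro file name below are made up for illustration; STORED AS AVRO and INSERT ... VALUES need Hive 0.14+):
$ hive -e "CREATE TABLE avro_table (id INT, name STRING) STORED AS AVRO; INSERT INTO avro_table VALUES (1, 'alice'), (2, 'bob');"
$ hdfs dfs -put mydata.avro /tmp/staging/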
07-17-2018
10:46 PM
@n c Please review this HCC link: https://community.hortonworks.com/questions/57866/how-to-move-hive-and-associated-components-from-on.html The most important piece is to take a good database backup while the Hive Metastore is down. Then, as outlined above, move the MySQL database (or whichever database is used) first; after that you can move the other components using Ambari. And yes, WebHCat is part of Hive! HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
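For the backup step, a minimal sketch assuming a MySQL-backed metastore with a database named 'hive' on host 'metastore-host' (both placeholders, adjust to your environment):
$ mysqldump -u hive -p -h metastore-host hive > hive_metastore_backup.sql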
07-02-2018
07:24 AM
@n c Can you share your scheduler settings in YARN --> Configs --> Advanced --> Scheduler --> Capacity Scheduler?
04-19-2018
12:39 PM
@n c Did the above information help you? If yes, please accept the answer.
04-19-2018
12:44 PM
@n c If the above information helped you, could you please accept the answer?
02-16-2018
02:57 AM
Here you go: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_command-line-installation/content/determine-hdp-memory-config.html
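That page describes the yarn-utils.py helper script shipped with the HDP companion files; a hedged example invocation, assuming a worker node with 16 cores, 64 GB of RAM, 4 data disks and HBase installed (swap in your own node specs), would be:
$ python yarn-utils.py -c 16 -m 64 -d 4 -k True
The script then prints suggested YARN and MapReduce memory settings for those specs.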
10-04-2017
07:58 PM
Thanks so much thussain. Worked fine.
09-16-2017
12:18 PM
Hi @n c, You are welcome! 🙂 I don't think there are other objects in Hive to move (but I'm not sure). There are UDFs; for those you need to export the jar you use for the UDF on your first cluster and register it again on the new one. May I ask you to accept my answer? 🙂 Thanks! Michel
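For the UDF case, once the jar has been copied to the new cluster, re-registering it might look like the sketch below (the function name, class and jar path are placeholders, not from this thread; CREATE FUNCTION ... USING JAR needs Hive 0.13+):
$ hive -e "CREATE FUNCTION my_udf AS 'com.example.hive.MyUDF' USING JAR 'hdfs:///apps/hive/udfs/my-udf.jar';"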
11-08-2017
02:20 PM
@n c It works after I set this in hive-site.xml. I am not sure of the reason, but it forces the query to run MapReduce: hive.compute.query.using.stats=false
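For reference, the corresponding hive-site.xml entry would look roughly like this; the same property can also be toggled per session with SET hive.compute.query.using.stats=false; before running the query:
<property>
  <name>hive.compute.query.using.stats</name>
  <value>false</value>
</property>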
06-12-2017
02:44 PM
HBase shell also includes an alternative command to `truncate` called `truncate_preserve`, which can be used to drop the data within a table while still maintaining the table's metadata (including its region splits) in hbase:meta. You can also use the `status 'detailed'` command from `hbase shell` to see the split/presplit regions. For example:
$ /opt/bin/hbshell.bash <<< "truncate_preserve 'dev06ostgarnerh:contact'"
$ /opt/bin/hbshell.bash <<< "status 'detailed'" | grep contact
"dev06ostgarh:contact,600,1497277520170.a17b52eec2a9df78bbc175a5238a13f8."
"dev06ostgarh:contact,800,1497277520170.275ff20d4c4036163ee90a94092b47e6."
"dev06ostgarh:contact,A00,1497277520170.e94b5202f2cbb100dd74e858f772573e."
"dev06ostgarh:contact,E00,1497277520170.7d9f1b140431d2a975ee35e716b9337e."
"dev06ostgarh:contact,,1497277520170.ed4ad8c3fb40598bb51d2ad91dee6973."
"dev06ostgarh:contact,100,1497277520170.43378ea32c0b3bf64dd697b62b87b23c."
"dev06ostgarh:contact,200,1497277520170.be10a9e051b0f9a4f7c460c1b36969d7."
"dev06ostgarh:contact,F00,1497277520170.64f57d24d29058f238b1addb08c603a8."
"dev06ostgarh:contact,300,1497277520170.6244d798c326984e47a577f579645487."
"dev06ostgarh:contact,500,1497277520170.c9b1ed0633850eb5086a32049a5597d5."
"dev06ostgarh:contact,C00,1497277520170.bb446488db4bb788248f41551db878f5."
"dev06ostgarh:contact,D00,1497277520170.b2d2bc1febc9206df27e05a976066027."
"dev06ostgarh:contact,400,1497277520170.b61ffc5cdfda59f96b47d83f8aa46134."
"dev06ostgarh:contact,700,1497277520170.e4e1cab56f430344c71e24fa6673b55f."
"dev06ostgarh:contact,900,1497277520170.1cb1067053fe203426e0e9e2c00bb37f."
"dev06ostgarh:contact,B00,1497277520170.5867477ee49a19f6dafd0ff60f9b8f6f."