Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 609 | 06-04-2025 11:36 PM |
| | 1173 | 03-23-2025 05:23 AM |
| | 579 | 03-17-2025 10:18 AM |
| | 2182 | 03-05-2025 01:34 PM |
| | 1373 | 03-03-2025 01:09 PM |
08-28-2017
11:27 AM
@ANSARI FAHEEM AHMED Here is a smooth method of giving a CentOS 7 user root privileges: link
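The linked method can be sketched as below; on CentOS 7, members of the wheel group may run any command through sudo. The username "hadoopadmin" and the simulated group list are hypothetical, for illustration only.

```shell
# On a real host you would run (as root):
#   usermod -aG wheel hadoopadmin
# and then verify with: id -nG hadoopadmin
# Simulated output of "id -nG hadoopadmin" so the check is safe to run anywhere:
groups="hadoopadmin wheel"
case " $groups " in
  *" wheel "*) echo "has sudo via wheel" ;;
  *)           echo "no wheel membership" ;;
esac
```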
08-28-2017
10:12 AM
@Ashnee Sharma Writing and reading should use the same version of parquet-hadoop. Use the newer version of parquet-hadoop for writing the parquet files.
Also check the permissions on parquet-0.log.lock.
08-28-2017
10:02 AM
@yu clyonce Nice to know my solution helped. Could you please accept the answer? That marks the thread as solved and rewards the effort.
08-28-2017
08:26 AM
@Kasim Shaik It doesn't matter whether you used the tarball or the blueprint I sent you. After the installation, how are you managing your cluster? I assume with Ambari, correct? Just check how many directories are listed under Ambari UI-->HDFS-->Configs-->NameNode directories.
08-28-2017
08:11 AM
@Kasim Shaik Can you paste a screenshot of the directories under Ambari UI-->HDFS-->Configs-->NameNode directories? If you have ONLY one directory path there, that explains why you have only one copy.
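The same check can be done from the command line: each comma-separated path in `dfs.namenode.name.dir` holds a full copy of the NameNode metadata, so one path means one copy. The values below are hypothetical examples; on a live cluster you would read the real value with `hdfs getconf -confKey dfs.namenode.name.dir`.

```shell
# Hypothetical single-directory value, as Ambari shows it:
name_dirs="/hadoop/hdfs/namenode"
printf '%s\n' "$name_dirs" | tr ',' '\n' | grep -c .   # -> 1 metadata copy

# Hypothetical two-directory value (what you want for redundancy):
name_dirs="/hadoop/hdfs/namenode,/mnt/disk2/hdfs/namenode"
printf '%s\n' "$name_dirs" | tr ',' '\n' | grep -c .   # -> 2 metadata copies
```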
08-28-2017
08:04 AM
@pooja shrivastava As stated by @Jay SenSharma, that amount of RAM isn't enough for both the HDP components and the OS/VirtualBox. You will need at least 12 GB, but 16 GB would be great to have. Also, depending on your laptop, don't forget to check the BIOS and enable virtualization; otherwise your sandbox will NEVER boot up!
08-28-2017
08:00 AM
@Ashnee Sharma The underlying reason seems to be that this file was written with one version of parquet-hadoop and is being read with another. Check which parquet-hadoop jar your Hive installation uses by running # locate parquet-hadoop, then upgrade the parquet version being used and retry.
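Once `locate parquet-hadoop` returns the jars on the writer and reader sides, comparing the embedded versions is just string surgery. The jar names below are hypothetical placeholders, not the actual output from this thread:

```shell
# Hypothetical hits from "locate parquet-hadoop" on each side:
writer_jar="parquet-hadoop-1.8.1.jar"
reader_jar="parquet-hadoop-1.6.0.jar"

# Strip the "parquet-hadoop-" prefix and the ".jar" suffix to get bare versions.
writer_ver=${writer_jar#parquet-hadoop-}; writer_ver=${writer_ver%.jar}
reader_ver=${reader_jar#parquet-hadoop-}; reader_ver=${reader_ver%.jar}

if [ "$writer_ver" != "$reader_ver" ]; then
  echo "version mismatch: $writer_ver vs $reader_ver"
fi
```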
08-27-2017
09:26 AM
@Alexander Carreño Nice to know you recreated the Ambari databases and successfully launched the cluster installation. Whether your cluster is for production, test, or training, it's recommended you adopt good practices from day one. Never use the embedded database; create a custom database instead. I have a preference for MariaDB or MySQL: easy management and open source. You will have to create the databases for Hive, Oozie, and Ranger in advance, though that is optional since you could also do it during the configuration of the components.

The disk partitioning should follow the same logic as HDFS's built-in storage of 3 copies, the default replication factor for distributing file blocks across DataNodes. Having said that, a good practice is to have 3 separate disk partitions with different mount points. I see you are using Ubuntu; you should have used a custom partition layout, e.g.

mkdir -p /grid/0
mkdir -p /grid/1
mkdir -p /grid/2
mkdir -p /grid/x

or another classic layout:

mkdir /u01
mkdir /u02
mkdir /u03
mkdir /u0x

Please see this official cluster planning recommendation. Taking into account that you have no component installed, I would advise you to start afresh with a good structure:
Re-install Ubuntu and partition the disk accordingly (see the document above); 3 partitions of 300 GB could do.
Prepare the environment: firewall, NTP, passwordless SSH, etc.
Install MariaDB/MySQL and create the required Ambari database.
Create databases for Oozie, Hive, and Ranger (optional; can be set up later).
Choose the correct partitions: /grid/0 -- /grid/x or /u01 -- /u0x.

The installation should be successful. Please let me know if you need any info.
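The directory layout above can be sketched safely under a scratch root, so it runs anywhere without touching real disks; on an actual host each /grid/N would be the mount point of a separate disk partition:

```shell
# Create the /grid/0../grid/2 layout under a throwaway temp directory.
root=$(mktemp -d)
for i in 0 1 2; do
  mkdir -p "$root/grid/$i"
done
ls "$root/grid"          # lists: 0 1 2
rm -rf "$root"           # clean up the sketch
```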
08-26-2017
07:44 PM
@Micaël Dias I think you need to update kafka-env.sh with additional configuration. Go to Ambari UI--->Kafka--->Configs--->Advanced kafka-env--->kafka-env template and add the line below under "# The java implementation to use.":

export KAFKA_KERBEROS_PARAMS="-Djava.security.auth.login.config=/etc/kafka/conf/kafka_jaas.conf"

Save the changes, then restart the Kafka service and any components with stale configurations. Keep me posted.
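For reference, the JAAS file that KAFKA_KERBEROS_PARAMS points at typically contains a KafkaServer login section like the sketch below. The keytab path, hostname, and realm are placeholders for your environment, not values from this thread:

```
// Sketch of /etc/kafka/conf/kafka_jaas.conf (placeholders, adjust to your realm)
KafkaServer {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   storeKey=true
   keyTab="/etc/security/keytabs/kafka.service.keytab"
   serviceName="kafka"
   principal="kafka/host.example.com@EXAMPLE.COM";
};
```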
08-26-2017
07:20 PM
1 Kudo
@Makenzie Kalb I think this support KB is a solution to your issue. Let me know if it helped.