Member since: 06-26-2018
Posts: 26
Kudos Received: 2
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3269 | 10-22-2019 09:24 AM
 | 2282 | 10-29-2018 02:28 PM
 | 10665 | 10-08-2018 08:36 AM
03-31-2022
02:03 PM
Hello, I have the errors mentioned in the description. I am executing the procedure, but I get the following errors when running make -f Makefile.unx:

[root@emkioqlnclo01 isa-l]# make -f Makefile.unx
  ---> Building erasure_code/gf_vect_mul_sse.asm x86_64
  ---> Building erasure_code/gf_vect_mul_avx.asm x86_64
  ---> Building erasure_code/gf_vect_dot_prod_sse.asm x86_64
  ---> Building erasure_code/gf_vect_dot_prod_avx.asm x86_64
  ---> Building erasure_code/gf_vect_dot_prod_avx2.asm x86_64
  ---> Building erasure_code/gf_2vect_dot_prod_sse.asm x86_64
  ---> Building erasure_code/gf_3vect_dot_prod_sse.asm x86_64
  ---> Building erasure_code/gf_4vect_dot_prod_sse.asm x86_64
  ---> Building erasure_code/gf_5vect_dot_prod_sse.asm x86_64
  ---> Building erasure_code/gf_6vect_dot_prod_sse.asm x86_64
  ---> Building erasure_code/gf_2vect_dot_prod_avx.asm x86_64
  ---> Building erasure_code/gf_3vect_dot_prod_avx.asm x86_64
  ---> Building erasure_code/gf_4vect_dot_prod_avx.asm x86_64
  ---> Building erasure_code/gf_5vect_dot_prod_avx.asm x86_64
  ---> Building erasure_code/gf_6vect_dot_prod_avx.asm x86_64
  ---> Building erasure_code/gf_2vect_dot_prod_avx2.asm x86_64
  ---> Building erasure_code/gf_3vect_dot_prod_avx2.asm x86_64
  ---> Building erasure_code/gf_4vect_dot_prod_avx2.asm x86_64
  ---> Building erasure_code/gf_5vect_dot_prod_avx2.asm x86_64
  ---> Building erasure_code/gf_6vect_dot_prod_avx2.asm x86_64
  ---> Building erasure_code/gf_vect_mad_sse.asm x86_64
  ---> Building erasure_code/gf_2vect_mad_sse.asm x86_64
  ---> Building erasure_code/gf_3vect_mad_sse.asm x86_64
  ---> Building erasure_code/gf_4vect_mad_sse.asm x86_64
  ---> Building erasure_code/gf_5vect_mad_sse.asm x86_64
  ---> Building erasure_code/gf_6vect_mad_sse.asm x86_64
  ---> Building erasure_code/gf_vect_mad_avx.asm x86_64
  ---> Building erasure_code/gf_2vect_mad_avx.asm x86_64
  ---> Building erasure_code/gf_3vect_mad_avx.asm x86_64
  ---> Building erasure_code/gf_4vect_mad_avx.asm x86_64
  ---> Building erasure_code/gf_5vect_mad_avx.asm x86_64
  ---> Building erasure_code/gf_6vect_mad_avx.asm x86_64
  ---> Building erasure_code/gf_vect_mad_avx2.asm x86_64
  ---> Building erasure_code/gf_2vect_mad_avx2.asm x86_64
  ---> Building erasure_code/gf_3vect_mad_avx2.asm x86_64
  ---> Building erasure_code/gf_4vect_mad_avx2.asm x86_64
  ---> Building erasure_code/gf_5vect_mad_avx2.asm x86_64
  ---> Building erasure_code/gf_6vect_mad_avx2.asm x86_64
  ---> Building erasure_code/ec_multibinary.asm x86_64
multibinary.asm:283: error: expression syntax error
multibinary.asm:359: error: expression syntax error
make: *** [bin/ec_multibinary.o] Error 1
09-28-2020
11:49 AM
1 Kudo
ZooKeeper does not allow listing or editing a znode if the znode's ACL does not grant the required permissions to the user or group. This znode-level security is inherited from Apache ZooKeeper and is observed in all Cloudera distributions. There are a few scattered references for the workaround; this post compiles them together for Cloudera-managed clusters.
For the following error:
Authentication is not valid
There are two ways to address it:
Disable any ACL validation in Zookeeper (Not recommended):
Add the following flag in CM > Zookeeper config > search for 'Java Configuration Options for Zookeeper Server':

-Dzookeeper.skipACL=yes

Then restart and refresh the stale configs.
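As a quick sanity check (and a reminder of why this option is unsafe), once skipACL is active any client can list previously protected znodes without authenticating. A minimal sketch, assuming a ZooKeeper server on localhost:2181:

```
# With -Dzookeeper.skipACL=yes active, no addauth is needed to browse znodes:
zookeeper-client -server localhost:2181 ls /
```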
Add a Zookeeper super auth:
Skip the part between <SKIP> and </SKIP> if you want to use 'password' as the auth key; the precomputed digest below corresponds to it.

<SKIP>
cd /opt/cloudera/parcels/CDH/lib/zookeeper/
java -cp "./zookeeper.jar:lib/*" org.apache.zookeeper.server.auth.DigestAuthenticationProvider super:password

Use the last line from the output of the above command:

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
super:password->super:DyNYQEQvajljsxlhf5uS4PJ9R28=
</SKIP>
Add the following config in CM > Zookeeper config > search for 'Java Configuration Options for Zookeeper Server':

-Dzookeeper.DigestAuthenticationProvider.superDigest=super:DyNYQEQvajljsxlhf5uS4PJ9R28=
Restart and refresh the stale configs.
Once connected to zookeeper-client, run the following before executing any further commands:

addauth digest super:password

After this command, you will be able to run any operation on any znode.
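For illustration, here is what a session might look like; /hbase is just an example znode, and the setAcl shown is only to demonstrate that writes now succeed:

```
[zk: localhost:2181(CONNECTED) 0] addauth digest super:password
[zk: localhost:2181(CONNECTED) 1] getAcl /hbase
[zk: localhost:2181(CONNECTED) 2] setAcl /hbase world:anyone:cdrwa
```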
NOTE:
The version of slf4j-api may differ in later builds.
You can update the super password to any string you desire.
06-02-2020
10:55 AM
You have to change the owner of the file:

hadoop_amine@amine:/home/amine$ hadoop fs -chown -R hadoop_amine:hadoop_group /tmp/hadoop-yarn/staging/hadoop_amine/
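To confirm the change took effect, list the parent directory (same path as above):

```
# The hadoop_amine entry should now show hadoop_amine:hadoop_group as owner:group
hadoop fs -ls /tmp/hadoop-yarn/staging/
```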
10-23-2019
11:45 PM
You can try changing the limits from Ambari as well, under Ambari > YARN > Configs > Advanced. Restart YARN after increasing the limit.
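If you want to confirm the currently active values from the command line instead, you can grep the deployed client configs. The property names below are illustrative assumptions, since the exact limit depends on which setting you are raising:

```
# Check current values in the deployed YARN config (illustrative properties):
grep -A1 "yarn.nodemanager.resource.memory-mb" /etc/hadoop/conf/yarn-site.xml
grep -A1 "yarn.scheduler.maximum-allocation-mb" /etc/hadoop/conf/yarn-site.xml
```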
10-22-2019
08:11 PM
I would suggest going through the doc below and verifying the outbound rules for port 7180 (the Cloudera Manager Admin Console port): https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html
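If you prefer checking from the CLI, a minimal sketch with the AWS CLI (vpc-xxxxxxxx is a placeholder for your VPC ID):

```
# List the egress (outbound) entries of the network ACLs attached to your VPC:
aws ec2 describe-network-acls \
  --filters Name=vpc-id,Values=vpc-xxxxxxxx \
  --query 'NetworkAcls[].Entries[?Egress==`true`]'
```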
10-22-2019
12:05 PM
Good news! If that resolves your issue, please spare a moment to accept the solution. Thanks.
02-28-2019
11:51 AM
3 Kudos
Hi,

If you don't want SmartSense in your cluster, but it still comes as a default-selected component during the install wizard, the following steps will save you some trouble. Tried on HDP versions 3.0 and 3.1.

1. Go to the below path on the ambari-server node:
/var/lib/ambari-server/resources/stacks/HDP/3.0/services/SMARTSENSE/metainfo.xml
2. Open the above file in an editor (e.g. vi).
3. Comment out or delete the below line (line 23; the line number may vary between releases):
<selection>MANDATORY</selection>
4. After making the above change, restart ambari-server and proceed with the cluster install wizard.

SmartSense will no longer be a mandatory component. Thanks for reading.
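For convenience, the edit can also be scripted. A minimal sketch, assuming the HDP 3.0 stack path above (the path differs per stack version, and a .bak backup is kept):

```
# Comment out the MANDATORY selection in place, keeping a .bak copy:
sed -i.bak 's|<selection>MANDATORY</selection>|<!-- <selection>MANDATORY</selection> -->|' \
  /var/lib/ambari-server/resources/stacks/HDP/3.0/services/SMARTSENSE/metainfo.xml
ambari-server restart
```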
10-29-2018
04:21 PM
If it worked for you, please take a moment to log in and "Accept" the answer.
10-08-2018
11:36 AM
You can try the below changes in your submit command, as they may be causing the calculated hash value to differ.

I believe you want to write abc.txt into the sample folder of the s3a bucket hadoopsa. As you have already set hadoopsa as your defaultFS, you should use the below command (the sample folder must exist before you run it):

hdfs dfs -put abc.txt /sample/
OR
hdfs dfs -put abc.txt s3a://hadoopsa/sample/
In your command, when you put a file directly into s3a://sample/, it treats sample as the bucket name and tries to write to that bucket's base path.
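Putting it together, a minimal sequence assuming fs.defaultFS is set to s3a://hadoopsa:

```
hdfs dfs -mkdir -p /sample/      # the target folder must exist first
hdfs dfs -put abc.txt /sample/   # resolves to s3a://hadoopsa/sample/abc.txt
```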