Member since: 05-29-2017
Posts: 408
Kudos Received: 123
Solutions: 9

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 3275 | 09-01-2017 06:26 AM |
|  | 2118 | 05-04-2017 07:09 AM |
|  | 1915 | 09-12-2016 05:58 PM |
|  | 2641 | 07-22-2016 05:22 AM |
|  | 2054 | 07-21-2016 07:50 AM |
08-02-2016
01:14 PM
Team, when I run a Solr index job it runs in the default queue. Is there any property or other setting I can use to submit it to a different queue?

[root@m2 solr]# hadoop jar /opt/lucidworks-hdpsearch/job/lucidworks-hadoop-job-2.0.3.jar com.lucidworks.hadoop.ingest.IngestJob -DcsvFieldMapping=0=id,1=cat,2=name,3=price,4=instock,5=author -DcsvFirstLineComment -DidField=id -DcsvDelimiter="," -Dlww.commit.on.close=true -cls com.lucidworks.hadoop.ingest.CSVIngestMapper -c test -i csv/* -of com.lucidworks.hadoop.io.LWMapRedOutputFormat -zk m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181/solr
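For reference, a minimal sketch of what changing the queue might look like, assuming the IngestJob driver forwards generic Hadoop -D options into the submitted MapReduce job configuration (an assumption) and that a YARN queue named solrqueue exists (a placeholder name):

```bash
# Hypothetical: route the ingest job to a non-default YARN queue via
# mapreduce.job.queuename (assumes -D options reach the job configuration;
# "solrqueue" is a placeholder queue name).
hadoop jar /opt/lucidworks-hdpsearch/job/lucidworks-hadoop-job-2.0.3.jar \
  com.lucidworks.hadoop.ingest.IngestJob \
  -Dmapreduce.job.queuename=solrqueue \
  -DcsvFieldMapping=0=id,1=cat,2=name,3=price,4=instock,5=author \
  -DcsvFirstLineComment -DidField=id -DcsvDelimiter="," \
  -Dlww.commit.on.close=true \
  -cls com.lucidworks.hadoop.ingest.CSVIngestMapper -c test \
  -i csv/* -of com.lucidworks.hadoop.io.LWMapRedOutputFormat \
  -zk m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181/solr
```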
Labels:
- Apache Solr
08-01-2016
12:01 PM
1 Kudo
There is a situation where you unknowingly delete /hdp/apps/2.3.4.0-3485, with or without -skipTrash. You will be in trouble and other services will be impacted: you will not be able to run Hive, MapReduce, or Sqoop commands, and you will get the error shown below.

Case 1: If you deleted it without -skipTrash, it is very easy to recover:

[root@m1 ranger-hdfs-plugin]# hadoop fs -rmr /hdp/apps/2.3.4.0-3485
rmr: DEPRECATED: Please use 'rm -r' instead.
16/07/28 01:59:22 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 0 minutes.
Moved: 'hdfs://HDPTSTHA/hdp/apps/2.3.4.0' to trash at: hdfs://HDPTSTHA/user/hdfs/.Trash/Current

The deleted directory goes to your trash, so you can copy it back from there (note that hadoop fs -put expects a local source, so use -cp or -mv for an HDFS-to-HDFS restore):

hadoop fs -cp hdfs://HDPTSTHA/user/hdfs/.Trash/Current/hdp/apps/2.3.4.0 /hdp/apps/

Case 2: If you deleted it with -skipTrash, you need to execute the following steps. The deletion and the resulting Hive failure look like this:

[root@m1 ranger-hdfs-plugin]# hadoop fs -rmr -skipTrash /hdp/apps/2.3.4.0-3485
rmr: DEPRECATED: Please use 'rm -r' instead.
Deleted /hdp/apps/2.3.4.0-3485

[root@m1 admin]# hive
WARNING: Use "yarn jar" to launch YARN applications.
16/07/27 22:05:04 WARN conf.HiveConf: HiveConf of name hive.server2.enable.impersonation does not exist
Logging initialized using configuration in file:/etc/hive/2.3.4.0-3485/0/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.io.FileNotFoundException: File does not exist: /hdp/apps/2.3.4.0-3485/tez/tez.tar.gz
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:507)

Resolution: Don't worry, you can resolve this issue with the steps below. Note: replace <hdp-version> with your HDP version (here 2.3.4.0-3485).

Step 1: First, recreate the required directories:

hdfs dfs -mkdir -p /hdp/apps/<hdp-version>/mapreduce
hdfs dfs -mkdir -p /hdp/apps/<hdp-version>/hive
hdfs dfs -mkdir -p /hdp/apps/<hdp-version>/tez
hdfs dfs -mkdir -p /hdp/apps/<hdp-version>/sqoop
hdfs dfs -mkdir -p /hdp/apps/<hdp-version>/pig

Step 2: Now copy the required tarballs into the matching directories:

hdfs dfs -put /usr/hdp/<hdp-version>/hadoop/mapreduce.tar.gz /hdp/apps/<hdp-version>/mapreduce/
hdfs dfs -put /usr/hdp/<hdp-version>/hive/hive.tar.gz /hdp/apps/<hdp-version>/hive/
hdfs dfs -put /usr/hdp/<hdp-version>/tez/lib/tez.tar.gz /hdp/apps/<hdp-version>/tez/
hdfs dfs -put /usr/hdp/<hdp-version>/sqoop/sqoop.tar.gz /hdp/apps/<hdp-version>/sqoop/
hdfs dfs -put /usr/hdp/<hdp-version>/pig/pig.tar.gz /hdp/apps/<hdp-version>/pig/

Step 3: Now fix the directory ownership and permissions:

hdfs dfs -chown -R hdfs:hadoop /hdp
hdfs dfs -chmod -R 555 /hdp/apps/<hdp-version>

Now you will be able to start the Hive CLI and other jobs again:

[root@m1 ~]# hive
WARNING: Use "yarn jar" to launch YARN applications.
16/07/27 23:33:42 WARN conf.HiveConf: HiveConf of name hive.server2.enable.impersonation does not exist
Logging initialized using configuration in file:/etc/hive/2.3.4.0-3485/0/hive-log4j.properties
hive>

I hope this helps you restore your cluster. Please feel free to share your suggestions.
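For reference, the same recreate-and-upload sequence can be scripted. Here is a minimal sketch, assuming HDP 2.3.4.0-3485 and that all five tarballs are still present under /usr/hdp on the node you run it from:

```bash
#!/bin/bash
# Sketch: recreate /hdp/apps/<version> in HDFS and re-upload the component tarballs.
# Assumption: run as the hdfs user on a node that still has /usr/hdp/<version> installed.
VERSION=2.3.4.0-3485   # replace with your HDP build

declare -A TARBALLS=(
  [mapreduce]="/usr/hdp/${VERSION}/hadoop/mapreduce.tar.gz"
  [hive]="/usr/hdp/${VERSION}/hive/hive.tar.gz"
  [tez]="/usr/hdp/${VERSION}/tez/lib/tez.tar.gz"
  [sqoop]="/usr/hdp/${VERSION}/sqoop/sqoop.tar.gz"
  [pig]="/usr/hdp/${VERSION}/pig/pig.tar.gz"
)

for component in "${!TARBALLS[@]}"; do
  hdfs dfs -mkdir -p "/hdp/apps/${VERSION}/${component}"
  hdfs dfs -put "${TARBALLS[$component]}" "/hdp/apps/${VERSION}/${component}/"
done

# Restore ownership and read-only permissions as in Step 3.
hdfs dfs -chown -R hdfs:hadoop /hdp
hdfs dfs -chmod -R 555 "/hdp/apps/${VERSION}"
```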
08-01-2016
05:52 AM
Thanks @mqureshi. The above solution will fix the Hive issue, but do you know how to restore the other files as well?

79.1 M   /hdp/apps/2.2.0.0-2041/hive
182.6 M  /hdp/apps/2.2.0.0-2041/mapreduce
92.6 M   /hdp/apps/2.2.0.0-2041/pig
5.2 M    /hdp/apps/2.2.0.0-2041/sqoop
34.8 M   /hdp/apps/2.2.0.0-2041/tez
07-31-2016
03:12 PM
Team: By mistake I removed /hdp/apps/2.3.4.0-3485 with -skipTrash, and because of that Hive and other services are throwing errors. Is there any way to restore it, or any other alternative?

[root@m1 ranger-hdfs-plugin]# hadoop fs -rmr -skipTrash /hdp/apps/2.3.4.0-3485
rmr: DEPRECATED: Please use 'rm -r' instead.
Deleted /hdp/apps/2.3.4.0-3485

When I try to access Hive it throws the error below:

[root@m1 admin]# hive
WARNING: Use "yarn jar" to launch YARN applications.
16/07/27 22:05:04 WARN conf.HiveConf: HiveConf of name hive.server2.enable.impersonation does not exist
Logging initialized using configuration in file:/etc/hive/2.3.4.0-3485/0/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.io.FileNotFoundException: File does not exist: /hdp/apps/2.3.4.0-3485/tez/tez.tar.gz
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:507)
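Before re-uploading anything, it helps to confirm the component tarballs are still present on local disk under /usr/hdp; a quick check, with paths assumed from a standard HDP install layout:

```bash
# Sketch: verify the tarballs that get re-uploaded to /hdp/apps/<version>
# are still present locally (paths assume a standard HDP install layout).
VERSION=2.3.4.0-3485
ls -lh /usr/hdp/${VERSION}/hadoop/mapreduce.tar.gz \
       /usr/hdp/${VERSION}/hive/hive.tar.gz \
       /usr/hdp/${VERSION}/tez/lib/tez.tar.gz \
       /usr/hdp/${VERSION}/sqoop/sqoop.tar.gz \
       /usr/hdp/${VERSION}/pig/pig.tar.gz
```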
07-31-2016
03:43 AM
1 Kudo
I have seen an issue with the Application Timeline Server (ATS). ATS uses a LevelDB database stored in the location specified by yarn.timeline-service.leveldb-timeline-store.path in yarn-site.xml. All metadata is stored in *.sst files under that location, so over time you may run into a disk-space issue. However, it is not good practice to delete *.sst files directly: an *.sst file is a sorted table of key/value entries partitioned by key rather than by timestamp, so there is actually no "old" *.sst file that can safely be deleted. To control the space used by the LevelDB store, enable TTL (time to live) instead. Once it is enabled, timeline entities older than the TTL are discarded, and you can set the TTL lower than the default to give timeline entities a shorter lifetime:

<property>
  <description>Enable age off of timeline store data.</description>
  <name>yarn.timeline-service.ttl-enable</name>
  <value>true</value>
</property>
<property>
  <description>Time to live for timeline store data in milliseconds.</description>
  <name>yarn.timeline-service.ttl-ms</name>
  <value>604800000</value>
</property>

But if you deleted these files manually by mistake, as I did, then you may see ATS fail or get one of the following errors:

error code: 500, message: Internal Server Error{"message":"Failed to fetch results by the proxy from url: http://server:8188/ws/v1/timeline/TEZ_DAG_ID?limit=11&_=1469716920323&primaryFilter=user:$user&","status":500,"trace":"{\"exception\":\"WebApplicationException\",\"message\":\"java.io.IOException: org.iq80.leveldb.DBException: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: /hadoop/yarn/timeline/leveldb-timeline-store.ldb/6378017.sst: No such file or directory\",\"javaClassName\":\"javax.ws.rs.WebApplicationException\"}"}

Or:

(AbstractService.java:noteFailure(272)) - Service org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore failed in state INITED; cause: org.fusesource.leveldbjni.internal.NativeDB$DBException: Corruption: 116 missing files; e.g.: /tmp/hadoop/yarn/timeline/leveldb-timeline-store.ldb/001052.sst
org.fusesource.leveldbjni.internal.NativeDB$DBException: Corruption: 116 missing files; e.g.: /tmp/hadoop/yarn/timeline/leveldb-timeline-store.ldb/001052.sst

Resolution:

Step 1: Go to the configured location (here /hadoop/yarn/timeline/leveldb-timeline-store.ldb); it contains a text file named "CURRENT":

cd /hadoop/yarn/timeline/leveldb-timeline-store.ldb
ls -ltrh | grep -i CURRENT

Step 2: Copy the CURRENT file to a temporary location:

cp /hadoop/yarn/timeline/leveldb-timeline-store.ldb/CURRENT /tmp

Step 3: Remove the CURRENT file:

rm /hadoop/yarn/timeline/leveldb-timeline-store.ldb/CURRENT

Step 4: Restart the YARN service via Ambari.

These steps resolved the issue for me. I hope they help you as well.
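As a worked example of choosing a smaller TTL than the 604800000 ms (7-day) default shown above, the value is simply the retention period expressed in milliseconds; a three-day retention (the period here is only an example) works out like this:

```bash
# Compute a shorter value for yarn.timeline-service.ttl-ms.
# Example: keep 3 days of timeline data instead of the 7-day default (604800000 ms).
DAYS=3
echo $(( DAYS * 24 * 3600 * 1000 ))   # prints 259200000
```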
07-26-2016
09:24 AM
@Predrag Minovic: I also had the same issue and tried to apply your workaround, but it failed with the following error:

[ams@server resource_monitoring]$ python psutil/build.py
Executing make at location: /usr/lib/python2.6/site-packages/resource_monitoring/psutil
psutil build failed. Please find build output at: /usr/lib/python2.6/site-packages/resource_monitoring/psutil/build.out

The build.out file contains the output below. Can you please help me resolve it?

rm -f `find . -type f -name \*.py[co]`
rm -f `find . -type f -name \*.so`
rm -f `find . -type f -name .\*~`
rm -f `find . -type f -name \*.orig`
rm -f `find . -type f -name \*.bak`
rm -f `find . -type f -name \*.rej`
rm -rf `find . -type d -name __pycache__`
rm -rf *.egg-info
rm -rf *\estfile*
rm -rf build
rm -rf dist
rm -rf docs/_build
python setup.py build
running build
running build_py
creating build
creating build/lib.linux-x86_64-2.6
creating build/lib.linux-x86_64-2.6/psutil
copying psutil/__init__.py -> build/lib.linux-x86_64-2.6/psutil
copying psutil/_psosx.py -> build/lib.linux-x86_64-2.6/psutil
copying psutil/_psposix.py -> build/lib.linux-x86_64-2.6/psutil
copying psutil/_common.py -> build/lib.linux-x86_64-2.6/psutil
copying psutil/_pslinux.py -> build/lib.linux-x86_64-2.6/psutil
copying psutil/_psbsd.py -> build/lib.linux-x86_64-2.6/psutil
copying psutil/_compat.py -> build/lib.linux-x86_64-2.6/psutil
copying psutil/_pssunos.py -> build/lib.linux-x86_64-2.6/psutil
copying psutil/_pswindows.py -> build/lib.linux-x86_64-2.6/psutil
running build_ext
building '_psutil_linux' extension
creating build/temp.linux-x86_64-2.6
creating build/temp.linux-x86_64-2.6/psutil
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/include/python2.6 -c psutil/_psutil_linux.c -o build/temp.linux-x86_64-2.6/psutil/_psutil_linux.o
psutil/_psutil_linux.c: In function 'psutil_proc_cpu_affinity_set':
psutil/_psutil_linux.c:327: warning: suggest explicit braces to avoid ambiguous 'else'
gcc -pthread -shared build/temp.linux-x86_64-2.6/psutil/_psutil_linux.o -L/usr/lib64 -lpython2.6 -o build/lib.linux-x86_64-2.6/_psutil_linux.so
/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../crti.o: could not read symbols: File in wrong format
collect2: ld returned 1 exit status
error: command 'gcc' failed with exit status 1
make: *** [build] Error 1
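For what it's worth, the "crti.o: could not read symbols: File in wrong format" line from ld usually points at an architecture mismatch in the glibc development files the linker picked up (for example, a 32-bit crti.o on a 64-bit build). A hedged way to check, assuming a yum/RPM-based RHEL or CentOS system:

```bash
# Check which architecture the crti.o found by the linker was built for,
# and which glibc-devel packages are installed (assumes an RPM-based system).
file /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../crti.o
rpm -qa 'glibc-devel*'
# If the 64-bit glibc-devel is missing or damaged, reinstalling it is a common fix:
# yum reinstall glibc-devel
```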
07-24-2016
05:22 AM
Thanks @mqureshi. I understood your point and agree that Kerberos is always the best option to secure a cluster. But I am looking for an alternative to secure Solr, such as Knox with Ranger or something else. So do we have any alternative?
07-22-2016
06:17 AM
@Ancil McBarnett: I also have a requirement to secure Solr with Ranger, but I don't have a Kerberos environment. So is it possible to secure it with Ranger without Kerberos?
07-22-2016
06:06 AM
2 Kudos
@Jonas Strau Can we enable the Ranger plugin for Solr without Kerberos?
07-22-2016
05:27 AM
Thanks @Ravi. I solved it by changing the value of SOLR_HEAP to 1024 MB in /opt/lucidworks-hdpsearch/solr/bin/solr.in.sh. Thanks once again for all your help.

SOLR_HEAP="1024m"

[solr@m1 solr]$ /opt/lucidworks-hdpsearch/solr/bin/solr create -c test -d /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf -n test -s 2 -rf 2
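A quick way to confirm the new heap size actually took effect after restarting Solr (just a sketch; the grep patterns are assumptions about how the setting shows up):

```bash
# Confirm the SOLR_HEAP override is present in solr.in.sh ...
grep SOLR_HEAP /opt/lucidworks-hdpsearch/solr/bin/solr.in.sh
# ... and that the running Solr JVM picked up a matching -Xmx value.
ps -ef | grep '[s]olr' | grep -o -- '-Xmx[^ ]*'
```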