Member since: 08-08-2013
Posts: 339
Kudos Received: 132
Solutions: 27

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 14822 | 01-18-2018 08:38 AM
 | 1570 | 05-11-2017 06:50 PM
 | 9186 | 04-28-2017 11:00 AM
 | 3437 | 04-12-2017 01:36 AM
 | 2832 | 02-14-2017 05:11 AM
12-12-2016
11:06 AM
Hi @slachterman , many thanks for this hint. Could you please send me the details of the processor configuration for dropping lines when they are invalid? Thanks and regards...
12-12-2016
11:04 AM
Hi @aengineer , many thanks, I'll try to gather the necessary details and open a ticket there.
12-09-2016
07:28 PM
Hi @aengineer , it happens frequently. I created an Oozie job to collect the logs each night from the day before, and the logs from yesterday show the same issue. The Oozie job runs at 3 AM; by that time the logs from the previous day should have been closed correctly... I guess.
12-08-2016
04:53 PM
Hi, I configured Ranger to write its audit log to HDFS only. Now I have directories like:

/ranger/audit/hiveServer2/20161206
/ranger/audit/hiveServer2/20161207

...and the same for hdfs, hbase, etc. Each day I collect all the individual files (from every service) into one common folder and put a Hive table on top. This is similar to what is described here on HCC, just extended by collecting all the files from the same day into a common directory that the partition points to. Unfortunately the Hive-QL SELECT statement fails with a JSON parse error, because some of the created log files are corrupt (invalid JSON): their last line is simply cut off, e.g.:

hdfs dfs -cat /ranger/audit/hiveServer2/20161207/hiveServer2_ranger_audit_<hostname>.log
...
{"repoType":3,"repo":"hdp_hive","reqUser":"xxxxxx","evtTime":"2016-12-07 08:13:20.276","access":"SELECT","resource":"xxxxxxx","resType":"@column","action":"QUERY

...but the first file from the same day looks fine:

hdfs dfs -cat /ranger/audit/hiveServer2/20161207/hiveServer2_ranger_audit_<hostname>.1.log
...
{"repoType":3,"repo":"hdp_hive","reqUser":"xxxxx","evtTime":"2016-12-07 12:16:24.474","access":"USE","resource":"xxxx","resType":"@database","action":"SWITCHDATABASE","result":1,"policy":17,"enforcer":"ranger-acl","sess":"bf9a9f2e-ee90-4784-9d82-87008ad2e7fa","cliType":"HIVESERVER2","cliIP":"xxxxxx","reqData":"USE dbname","agentHost":"xxxxxxx","logType":"RangerAudit","id":"5b0b00ed-ed60-4817-85e0-e1c629952414","seq_num":213,"event_count":1,"event_dur_ms":0}

What can cause those corrupt files? Or what can I do to be able to select from the final Hive table without issues? Environment: HDP 2.3.4; Ranger policies for HDFS, Hive and HBase enabled, all configured to store audit logs in the HDFS folder "/ranger/audit". Thanks for any hints...
Labels:
- Apache Hadoop
- Apache Ranger
09-28-2016
06:43 PM
Hi @ANSARI FAHEEM AHMED , it looks like you are using MySQL as the underlying DB, and you need to set the property 'max_allowed_packet' in the MySQL config file. With a default installation it would be in /etc/my.cnf , where you can add/set the property, e.g.: max_allowed_packet=256M Then restart MySQL and give the ambari-server start another try. HTH, best regards
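For reference, the corresponding fragment of the MySQL config file would look something like this (256M is just an example value; size it to the payload that is failing):

```ini
# /etc/my.cnf
[mysqld]
max_allowed_packet=256M
```

After saving the file, restart the mysqld service so the new limit takes effect.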
09-25-2016
08:39 AM
Many thanks, will switch to clusterdock... looks very interesting btw 😉
09-25-2016
08:29 AM
Hi @dspivak , thanks for answering and sorry for the delay. I am running Fedora on the host, and I just printed the memory consumption of the overall situation. Are there any tweaks that need to be made to run CM plus the Hadoop services within that container, or where does this "out of memory" come from? Are there any container limits to modify, and if so, where? Thanks in advance...
09-22-2016
02:49 AM
Hello, I fetched the latest Docker image of the QuickStart single-node 'cluster' to start playing around on my laptop (16 GB RAM, 8 CPU cores). After starting the image via

sudo docker run --hostname=quickstart.cloudera --privileged=true -t -i -p 8888 -p 7180 cloudera/quickstart /bin/bash

I got a command line. From there I started Cloudera Manager (after having started mysqld) via

/home/cloudera/cloudera-manager --express

and logged into it. Starting up HDFS didn't work because of an "Out of memory" error for the NameNode. Snippet from /var/run/cloudera-scm-agent/process/16-hdfs-NAMENODE/hs_err_pid16341.log:

# Out of Memory Error (workgroup.cpp:96), pid=16341, tid=140217219892992

Then I stopped the Cloudera Management Services and started just the Service Monitor followed by the NameNode, which then came up fine. After that I wanted to start the Host Monitor from the Management Services, which again failed with "Out of system resources", although top inside the container shows:

Cpu(s): 4.4%us, 0.3%sy, 0.0%ni, 94.9%id, 0.4%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 16313048k total, 8682216k used, 7630832k free, 115504k buffers
Swap: 16380k total, 0k used, 16380k free, 2847244k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
10432 cloudera 20 0 4834m 483m 30m S 23.3 3.0 7:20.70 java
960 root 20 0 2247m 53m 5836 S 0.3 0.3 0:21.10 cmf-agent

Hence there should be enough free resources to start the Host Monitor. Shouldn't the services in the container run smoothly without resource issues, given 16 GB and enough cores on the host, or am I missing something here? Any help with solving this resource issue is highly appreciated 😄 Thanks and regards
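To rule out a container-level cap, one thing worth trying is the same docker run invocation with explicit resource flags set; by default Docker does not limit a container's memory, so if this makes no difference the limit likely comes from JVM heap settings inside CM instead. A sketch (the 8g values are just an illustration; adjust to your host):

```shell
# Same quickstart invocation, with explicit memory/swap limits added
# (8g is an example value, not a recommendation)
sudo docker run --hostname=quickstart.cloudera --privileged=true \
    -t -i -p 8888 -p 7180 \
    -m 8g --memory-swap 8g \
    cloudera/quickstart /bin/bash
```

Inside the running container, free -m or the limits in /sys/fs/cgroup can confirm what the container actually sees.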
Labels:
- Cloudera Manager
- Docker
- HDFS
09-21-2016
01:24 PM
OMG, stupid me 😄 Thanks @mclark , exactly that solved the issue. Sorry for bothering you!
09-21-2016
07:02 AM
Hello @mclark , thanks for explaining the versioning topic. Although it is a bit complex, if I interpret it correctly my combination of HDP 2.3.4 and NiFi 0.6 (extracted from HDF 1.2) should work, meaning PutKafka should be able to write to kerberized Kafka 0.9 with PLAINTEXTSASL. Unfortunately it does not, even after I modified the "Message Delimiter", either to 'new line' or to 'not set'. Just to be sure: the NiFi welcome page shows "Hortonworks Data Flow ... powered by Apache NiFi", the 'about' dialog is the one I pasted in this thread above, and the whole NiFi directory from which I start it was extracted from HDF-1.2.0.1-1.tar.gz. Therefore I am pretty sure I am running the HDF version of NiFi... What else can I check? Thanks in advance...