Member since 05-10-2016
303 Posts · 35 Kudos Received · 0 Solutions
10-10-2016
09:14 AM
Hi all, I've changed Ambari Metrics to "distributed" mode, so it now stores data on HDFS. When I restarted ambari-metrics-collector, I got these messages in hbase-ams-regionserver.log:
2016-10-10 10:58:56,981 INFO [B.defaultRpcServer.handler=26,queue=2,port=61300] master.HMaster: Client=ams/null create 'SYSTEM.CATALOG', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', coprocessor$2 => '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|', coprocessor$3 => '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', coprocessor$4 => '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', coprocessor$5 => '|org.apache.phoenix.coprocessor.MetaDataEndpointImpl|805306366|', coprocessor$6 => '|org.apache.phoenix.coprocessor.MetaDataRegionObserver|805306367|'}, {NAME => '0', BLOOMFILTER => 'ROW', VERSIONS => '1000', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'true', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2016-10-10 10:58:57,095 WARN [ProcedureExecutorThread-8] procedure.CreateTableProcedure: The table SYSTEM.CATALOG does not exist in meta but has a znode. run hbck to fix inconsistencies.
2016-10-10 10:58:57,308 INFO [ProcedureExecutorThread-8] procedure2.ProcedureExecutor: Rolledback procedure CreateTableProcedure (table=SYSTEM.CATALOG) id=26 owner=ams state=ROLLEDBACK exec-time=221msec exception=org.apache.hadoop.hbase.TableExistsException: SYSTEM.CATALOG
2016-10-10 10:59:45,777 INFO [timeline] timeline.HadoopTimelineMetricsSink: Unable to connect to collector, http://datanode001:6188/ws/v1/timeline/metrics
2016-10-10 10:59:45,778 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://datanode001:6188/ws/v1/timeline/metrics
2016-10-10 10:59:45,778 INFO [timeline] timeline.HadoopTimelineMetricsSink: Unable to connect to collector, http://datanode001:6188/ws/v1/timeline/metrics
2016-10-10 10:59:45,778 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://datanode001:6188/ws/v1/timeline/metrics
2016-10-10 10:59:45,779 INFO [timeline] timeline.HadoopTimelineMetricsSink: Unable to connect to collector, http://datanode001:6188/ws/v1/timeline/metrics
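The WARN line points to a stale SYSTEM.CATALOG znode left over from the previous (embedded) deployment. Below is a minimal cleanup sketch; the znode parent /ams-hbase-unsecure and the paths are assumptions, so verify zookeeper.znode.parent in ams-hbase-site.xml before deleting anything:
# Stop the collector first (via the service script here; Ambari works too).
ambari-metrics-collector stop
# Open the ZooKeeper CLI with the AMS HBase configuration, then drop the
# stale table znode that blocks CreateTableProcedure. The znode path below
# is an assumption based on the default parent.
/usr/lib/ams-hbase/bin/hbase --config /etc/ams-hbase/conf zkcli
  rmr /ams-hbase-unsecure/table/SYSTEM.CATALOG
  quit
ambari-metrics-collector start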
Labels: Apache HBase
10-06-2016
12:30 PM
Thanks @Bryan Bende. Perhaps it was not clear: I want to start 2 NiFi instances on the same cluster nodes (node1, node2, node3), so I would have 2 URLs ==> https://node1:9443 (instance 1 ==> canvas 1) and https://node1:9444 (instance 2 ==> canvas 2). Is that possible?
10-06-2016
08:15 AM
Hello all, I'm trying to configure two NiFi instances on the same cluster (3 nodes). My configuration:
Instance 1:
-bash-4.1# grep port conf/nifi.properties
nifi.remote.input.socket.port=10443
nifi.web.http.port=
nifi.web.https.port=9443
nifi.cluster.node.protocol.port=11443
-bash-4.1# more conf/zookeeper.properties
clientPort=2181
server.1=nifi001:2888:3888
server.2=nifi002:2888:3888
server.3=nifi003:2888:3888
Instance 2:
-bash-4.1# grep port conf/nifi.properties
nifi.remote.input.socket.port=10444
nifi.web.http.port=
nifi.web.https.port=9444
nifi.cluster.node.protocol.port=11444
-bash-4.1# more conf/zookeeper.properties
clientPort=2182
server.1=nifi001:2889:3889
server.2=nifi002:2889:3889
server.3=nifi003:2889:3889
Something must still be missing, because nifi002 and nifi003 can't join the cluster.
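For reference, here is a sketch of the other per-instance settings I'd expect to have to keep separate as well (property names are from the NiFi 1.x admin guide; the values, and the assumption that each instance runs its own embedded ZooKeeper ensemble, are mine):
# Instance 1, conf/nifi.properties
nifi.cluster.is.node=true
nifi.state.management.embedded.zookeeper.start=true
nifi.zookeeper.connect.string=nifi001:2181,nifi002:2181,nifi003:2181
# Instance 2, conf/nifi.properties
nifi.cluster.is.node=true
nifi.state.management.embedded.zookeeper.start=true
nifi.zookeeper.connect.string=nifi001:2182,nifi002:2182,nifi003:2182
Each instance also needs its own ZooKeeper dataDir (set in zookeeper.properties) containing a distinct myid file, and the Connect String in conf/state-management.xml has to match that instance's nifi.zookeeper.connect.string.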
Labels: Apache NiFi
10-05-2016
09:44 AM
@Kuldeep Kulkarni: thanks, I'll check hdfs-audit.log, but now my question is how I can know which command/job is behind a given log entry. For example:
hdfs-audit.log:2016-10-05 03:00:17,303 INFO FSNamesystem.audit: allowed=true ugi=zazi (auth:TOKEN) via oozie/master003@fma.com (auth:TOKEN) ip=/10.xx.224.93 cmd=open src=/user/prote/private/flume/20161004/FlumeData.428492.gz dst=null perm=null proto=rpc callerContext=CLI
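In this entry the ugi field shows the access was proxied through Oozie (ugi=zazi via oozie/master003), so one way to trace it is to match the user and timestamp against the Oozie jobs. A rough sketch, assuming the default audit log location and Oozie URL (both are guesses for this cluster):
# Find every audit entry touching that file.
grep 'FlumeData.428492.gz' /var/log/hadoop/hdfs/hdfs-audit.log
# List that user's Oozie jobs and look for one running around 03:00:17.
oozie jobs -oozie http://master003:11000/oozie -filter user=zazi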
10-05-2016
08:52 AM
Hello all, I have to audit HDFS activity, for example who wrote to HDFS and who copied data from HDFS. Is it possible to see that somewhere? Regards
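For anyone searching later: the NameNode writes one audit line per filesystem operation, and the cmd field separates reads from writes. A rough sketch, assuming the HDP default log location on the NameNode host:
# Write-type operations (creates, appends, renames, deletes).
grep -E 'cmd=(create|append|rename|delete|mkdirs)' /var/log/hadoop/hdfs/hdfs-audit.log
# Read-type operations (opens), i.e. who copied data out.
grep 'cmd=open' /var/log/hadoop/hdfs/hdfs-audit.log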
Labels: Apache Hadoop
09-29-2016
02:17 PM
Hi all, I'm not a developer, I'm an admin for a Hadoop platform. We have installed HDP 2.4.2, so with Spark 1.6.1. My questions concern the versioning of Python and R. All my servers run CentOS 6.8 with Python 2.6.6, so is it possible to use PySpark? My developer says he wants Python 2.7.x; I don't know why. If I need to install Python 2.7 or 3, does it have to be installed on the whole platform or just on one datanode or master? And SparkR needs R to be installed, since it is not shipped with Spark? Thanks.
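Since Spark executors start a Python process on every worker that runs a PySpark task, the interpreter has to exist at the same path on all nodes, not just one. A sketch of pointing Spark at a newer interpreter, assuming it was installed under /usr/local/bin on every node:
# conf/spark-env.sh (or export in the shell before spark-submit)
export PYSPARK_PYTHON=/usr/local/bin/python2.7          # interpreter used by executors
export PYSPARK_DRIVER_PYTHON=/usr/local/bin/python2.7   # interpreter used by the driver
The same reasoning applies to SparkR: R is not bundled with Spark, so it must be installed on every node that runs SparkR code.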
Labels: Apache Spark
09-29-2016
08:48 AM
Thanks @Pierre Villard: I followed your tutorial to set up my cluster, but in my case the primary node and the cluster coordinator always end up on the same node. Do you know what's wrong?
09-29-2016
07:57 AM
Hi all, in a cluster configuration, where can I see which server ingests the data with a processor such as GetHDFS, for example? Thanks
Labels: Apache NiFi