Member since: 01-26-2018
Posts: 34
Kudos Received: 1
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1735 | 07-12-2018 03:34 PM
 | 2432 | 02-21-2018 05:44 AM
 | 1693 | 02-21-2018 05:07 AM
10-03-2018
08:42 AM
@Robert Levas: Thanks a lot for the solution; the 'unsupported type' issue has been resolved after the Kerberos config changes. I have raised another issue here for the Storm SPNEGO load balancer.
07-12-2018
03:34 PM
1 Kudo
@Lija Mohan, this seems to be a Metron Maven build issue on the Windows platform, caused by the npm build of the metron-config and metron-alerts UI modules. Could you try commenting those modules out under the metron-interface module's pom as a workaround? Hopefully you don't need the UI projects built locally. I will post here if I find a tweak or fix for the UI module builds.
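As a rough command-line alternative to editing the pom (a sketch only; it assumes Maven 3.2.1+ for the '!' module-exclusion syntax, the standard metron-interface module paths, and that nothing you build depends on the UI artifacts):

# Build from the Metron source root, skipping the two UI modules instead of commenting them out
mvn clean install -DskipTests -pl '!metron-interface/metron-config,!metron-interface/metron-alerts'

If that exclusion syntax gives you trouble, commenting the two module entries out of metron-interface's pom as described above should have the same effect.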
03-20-2018
08:47 AM
@asubramanian: Is HS the Hive Server? If not, which node is the optimum place to put the Hive Server? I have pretty low resources on every node for testing purposes (8 GB RAM and dual core).
02-21-2018
05:44 AM
I was able to resolve my issue. Thanks a lot @Jay Kumar SenSharma. I have added the solution here
02-21-2018
05:07 AM
I figured out the root cause, and it solved my issue. The root cause was the 5th point in this link: it seems that after I reduced the EBS volume's available space, I also had to decrease the 'Reserved space for HDFS' setting in the Ambari HDFS service's advanced configuration (the dfs.datanode.du.reserved property), which was higher than the available space. Once I brought it down, everything was back to normal 🙂
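A quick way to double-check that the fix has taken effect (a minimal sketch from my setup, where /data is the DataNode mount point; adjust the path to yours):

# Reserved space actually in effect after the Ambari change and HDFS restart, in bytes
hdfs getconf -confKey dfs.datanode.du.reserved
# This value must be well below the free space on the data volume
df -B1 /data
# The DataNodes should report a non-zero Configured Capacity again
hdfs dfsadmin -report | grep 'Configured Capacity'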
02-20-2018
04:57 PM
I have an Ambari-managed 10-node HDP cluster (2.5.0) deployed on Amazon EC2 instances running CentOS 7. I had mounted an EBS volume under the /data mount point and configured it as the NameNode and DataNode directory. Everything was working fine. For some reason I had to change the EBS volume, so I followed the steps below:
1. Stop all services from Ambari.
2. Mount the new EBS volume under the /data mount point.
3. Restart all Amazon EC2 instances.
4. Start the services using Ambari.
After the 4th step my HDFS is not working properly, and hence the HBase service is also failing. I am not getting any errors on either the DataNode or NameNode start, and Ambari shows the status as green. When I run hdfs dfsadmin -report I get the following output.
[hdfs@ip-172-31-29-141 ~]$ hdfs dfsadmin -report
Configured Capacity: 0 (0 B)
Present Capacity: 131072 (128 KB)
DFS Remaining: 0 (0 B)
DFS Used: 131072 (128 KB)
DFS Used%: 100.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
Live datanodes (4):
Name: 172.31.31.118:50010 (ip-172-31-31-118.ec2.internal)
Hostname: ip-172-31-31-118.ec2.internal
Decommission Status : Normal
Configured Capacity: 0 (0 B)
DFS Used: 32768 (32 KB)
Non DFS Used: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used%: 100.00%
DFS Remaining%: 0.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 2
Last contact: Tue Feb 20 16:26:29 UTC 2018
Name: 172.31.31.114:50010 (ip-172-31-31-114.ec2.internal)
Hostname: ip-172-31-31-114.ec2.internal
Decommission Status : Normal
Configured Capacity: 0 (0 B)
DFS Used: 32768 (32 KB)
Non DFS Used: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used%: 100.00%
DFS Remaining%: 0.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 2
Last contact: Tue Feb 20 16:26:29 UTC 2018
Name: 172.31.18.247:50010 (ip-172-31-18-247.ec2.internal)
Hostname: ip-172-31-18-247.ec2.internal
Decommission Status : Normal
Configured Capacity: 0 (0 B)
DFS Used: 32768 (32 KB)
Non DFS Used: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used%: 100.00%
DFS Remaining%: 0.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 2
Last contact: Tue Feb 20 16:26:29 UTC 2018
Name: 172.31.28.137:50010 (ip-172-31-28-137.ec2.internal)
Hostname: ip-172-31-28-137.ec2.internal
Decommission Status : Normal
Configured Capacity: 0 (0 B)
DFS Used: 32768 (32 KB)
Non DFS Used: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used%: 100.00%
DFS Remaining%: 0.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 2
Last contact: Tue Feb 20 16:26:29 UTC 2018
The issue is that my HBase service is not starting. The error I get in the HBase log file is as follows:
2018-02-20 11:56:44,465 WARN [Thread-70] hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /apps/hbase/data/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 4 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1649)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3198)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3122)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:843)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:500)
Also, I get a similar error when I try to put a file into HDFS via the command line. The error I get for the command 'hdfs dfs -put ./x2.txt /' is:
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /x2.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 4 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1649)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3198)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3122)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:843)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:500)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
at org.apache.hadoop.ipc.Client.call(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:457)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
What could be causing this ?
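A note on reading the report above: a DataNode's Configured Capacity is roughly the raw capacity of each configured data volume minus dfs.datanode.du.reserved, so a value of 0 on live nodes usually points at the data directories or the reserved-space setting rather than at connectivity. A rough check on one DataNode host (with /data being the mount point described above):

# Raw space the new EBS volume actually provides under the data directory mount point
df -h /data
# Space HDFS is told to hold back per volume; if this exceeds the free space above,
# the node ends up advertising zero usable capacity
hdfs getconf -confKey dfs.datanode.du.reserved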
02-20-2018
11:04 AM
@Jay Kumar SenSharma In addition to that, my HBase service is not able to start due to the error below. The EBS volume I changed is the DataNode directory I configured, and I had data in HBase before doing that. Do I need to do anything else to avoid this? (A quick HDFS write smoke test is sketched below the trace.)
2018-02-20 11:56:44,465 WARN [Thread-70] hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /apps/hbase/data/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 4 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1649)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3198)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3122)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:843)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:500)
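A simple smoke test before retrying the HBase start, just to confirm HDFS itself accepts block writes again (the file name and path are arbitrary):

echo "smoke test" > /tmp/hdfs_smoke.txt
# This fails with the same 'could only be replicated to 0 nodes' error until the capacity issue is fixed
hdfs dfs -put -f /tmp/hdfs_smoke.txt /tmp/
hdfs dfs -cat /tmp/hdfs_smoke.txt
hdfs dfs -rm -skipTrash /tmp/hdfs_smoke.txt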
02-20-2018
05:00 AM
@Jay Kumar SenSharma
When I stop and then start all services, I get the error below from the Zeppelin Notebook service start (a manual WebHDFS check is sketched after the traceback).
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0/package/scripts/master.py", line 522, in <module>
Master().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 367, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0/package/scripts/master.py", line 254, in start
self.create_zeppelin_dir(params)
File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0/package/scripts/master.py", line 89, in create_zeppelin_dir
replace_existing_files=True,
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 604, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 601, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 336, in action_delayed
self._create_resource()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 352, in _create_resource
self._create_file(self.main_resource.resource.target, source=self.main_resource.resource.source, mode=self.mode)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 467, in _create_file
self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, assertable_result=False, file_to_put=source, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 177, in run_command
return self._run_command(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 235, in _run_command
_, out, err = get_user_call_output(cmd, user=self.run_user, logoutput=self.logoutput, quiet=False)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py", line 61, in get_user_call_output
raise ExecutionFailed(err_msg, code, files_output[0], files_output[1])
resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/current/zeppelin-server/interpreter/spark/dep/zeppelin-spark-dependencies-0.6.0.2.5.3.0-37.jar -H 'Content-Type: application/octet-stream' 'http://ip-172-31-31-102.ec2.internal:50070/webhdfs/v1/apps/zeppelin/zeppelin-spark-dependencies-0.6.0.2.5.3.0-37.jar?op=CREATE&user.name=hdfs&overwrite=True&permission=444' 1>/tmp/tmp0f3h5s 2>/tmp/tmpLIZ7_n' returned 55. curl: (55) Send failure: Connection reset by peer
201
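Exercising WebHDFS by hand from the same host may help narrow down where the connection reset happens (hostname and port are taken from the failing command above; the target path is arbitrary):

# Plain metadata call against the NameNode's WebHDFS endpoint
curl -sS -i "http://ip-172-31-31-102.ec2.internal:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hdfs"
# A CREATE is first answered with a 307 redirect to a DataNode; if the reset only happens after
# following that redirect, the problem is on the DataNode side rather than on the NameNode
curl -sS -i -X PUT "http://ip-172-31-31-102.ec2.internal:50070/webhdfs/v1/tmp/webhdfs_probe?op=CREATE&user.name=hdfs&overwrite=true"

The quotes around the URLs matter because of the '&' characters.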
02-19-2018
06:27 PM
I have set up an Ambari cluster with 9 nodes. Everything was working perfectly. For some reason I had to change the data directory I had given for HDFS to another disk. I stopped all the services and mounted the new hard disk properly at the same old mount point, then rebooted all the nodes after mounting the new device. Now when I start all the services again, all of them fail. What could be the reason? Do I need to do any additional step here?
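A couple of things that may be worth verifying on every node before starting the services again (a rough checklist; /data is the mount point in my case, and the exact directories are whatever dfs.datanode.data.dir and dfs.namenode.name.dir point to in your configuration):

# Confirm the new disk is really mounted where HDFS expects its directories
df -h /data
# A freshly mounted volume starts out empty; the HDFS directories have to exist again
# with the right ownership (hdfs user) before the services can use them
ls -l /data
hdfs getconf -confKey dfs.datanode.data.dir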
Labels:
- Apache Ambari
02-16-2018
12:29 PM
@George Vetticaden: I have tried the above steps in my HCP cluster with HDP 2.5.3.0, along with the Metron Management UI. I don't need to do step 2, right? This is the same as the enrichment configuration done via the Metron UI, right? My enrichment configuration JSON is below; that should suffice for step 2, right? I ran the flat file loader script without the -n option:
/usr/metron/0.1BETA/bin/flatfile_loader.sh -i whois_ref.csv -t enrichment -c t -e extractor_config.json
{
"enrichment": {
"fieldMap": {},
"fieldToTypeMap": {
"url": [
"whois"
]
},
"config": {}
},
"threatIntel": {
"fieldMap": {},
"fieldToTypeMap": {},
"config": {},
"triageConfig": {
"riskLevelRules": [],
"aggregator": "MAX",
"aggregationConfig": {}
}
},
"configuration": {}
}
Unfortunately my enrichment is not working. The message arriving on my indexing Kafka topic is as follows:
{"code":200,"method":"GET","enrichmentsplitterbolt.splitter.end.ts":"1518783891207","enrichmentsplitterbolt.splitter.begin.ts":"1518783891207","is_alert":"true","url":"https:\/\/www.woodlandworldwide.com\/","source.type":"newtest","elapsed":2033,"ip_dst_addr":"182.71.43.17","original_string":"1518783890.244 2033 127.0.0.1 TCP_MISS\/200 49602 GET https:\/\/www.woodlandworldwide.com\/ - HIER_DIRECT\/182.71.43.17 text\/html\n","threatintelsplitterbolt.splitter.end.ts":"1518783891211","threatinteljoinbolt.joiner.ts":"1518783891213","bytes":49602,"enrichmentjoinbolt.joiner.ts":"1518783891209","action":"TCP_MISS","guid":"40ff89bf-71a1-4eec-acfd-d89886c9ce7f","threatintelsplitterbolt.splitter.begin.ts":"1518783891211","ip_src_addr":"127.0.0.1","timestamp":1518783890244}
I have tried adding both https:www.woodlandworldwide.com and just woodlandworldwide.com, as in your example, but no luck. How does Metron query the HBase table? Will it query for a domain similar to the url field?
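To see exactly what the loader wrote, and therefore what key Metron has to match, a quick scan of the enrichment table may help; the table name and column family are the ones passed to flatfile_loader.sh above (-t enrichment -c t):

# Dump a few rows of the enrichment table to inspect the stored indicator keys
echo "scan 'enrichment', {COLUMNS => 't', LIMIT => 10}" | hbase shell

As far as I understand, the HBase enrichment lookup is an exact match on the value of the mapped field (url in my fieldToTypeMap), so the stored indicator has to match the url field's value exactly rather than just the domain part.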