Member since: 02-13-2017
Posts: 59
Kudos Received: 0
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 528 | 11-01-2017 12:30 PM |
12-17-2020
09:20 PM
Hi, After accessing the HBase Master UI link in production, under hbck we see there are 967 regions listed as orphan regions in the filesystem. Can someone please help us with how we can clear them? Thanks. @gsharma
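Edit, for anyone who lands here later: one cautious approach is to sideline (move, not delete) each orphan region directory after confirming it is not referenced in hbase:meta. A minimal sketch, assuming the HDP default hbase.rootdir of /apps/hbase/data (verify the real value in hbase-site.xml; the table and region names are placeholders):
# confirm the region is not known to hbase:meta before touching it
echo "scan 'hbase:meta', {FILTER => \"PrefixFilter('<table>')\"}" | hbase shell
# sideline the orphan region directory so it can be restored if needed
hdfs dfs -mkdir -p /tmp/hbase-orphan-backup/<table>
hdfs dfs -mv /apps/hbase/data/data/default/<table>/<orphan-region-dir> /tmp/hbase-orphan-backup/<table>/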
12-14-2020
08:15 PM
Hi, I tried the below, but the notebook was not deleted. Thanks, ASIF.
12-13-2020
04:32 AM
Hi, As per the link https://zeppelin.apache.org/docs/0.8.0/usage/rest_api/interpreter.html#restart-an-interpreter, restarting a Zeppelin interpreter requires the interpreter ID, which we are unable to find. Also, an interpreter restart can only be done by an admin user, so those credentials also need to be passed via the script. Thanks, ASIF.
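For reference, the interpreter ID can be read from the settings listing of the same REST API. A minimal sketch, assuming Shiro authentication and the HDP default port 9995 (host, user, and password are placeholders):
# log in once and keep the session cookie (admin credentials required)
curl -s -c /tmp/zcookies -X POST -d 'userName=<admin>&password=<password>' http://<zeppelin-host>:9995/api/login
# list interpreter settings; each entry in the JSON body carries an "id" and a "name"
curl -s -b /tmp/zcookies http://<zeppelin-host>:9995/api/interpreter/setting
# restart the chosen interpreter by id
curl -s -b /tmp/zcookies -X PUT http://<zeppelin-host>:9995/api/interpreter/setting/restart/<interpreterId>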
12-11-2020
08:27 AM
@Scharan Can you please guide us with the below, as we need to restart the JDBC interpreter using a shell script / curl command. Thanks, ASIF.
12-11-2020
08:25 AM
@Akhil S Naik @Felix Albani @Scharan
12-11-2020
03:41 AM
Hi, We need to delete some Zeppelin notebooks by notebook ID via the REST API or a curl command. The default storage for Zeppelin notebooks is on HDFS. HDP 3.1.5, Zeppelin 0.8.0. Kindly help us with the same. @Akhil S Naik @Felix Albani @Scharan
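For the record, the notebook REST endpoint takes the note ID in the path. A minimal sketch, again assuming Shiro auth and the HDP default port 9995 (host, credentials, and note ID are placeholders); once the DELETE succeeds, Zeppelin's notebook repository should remove the HDFS-backed note file itself:
# authenticate and keep the session cookie
curl -s -c /tmp/zcookies -X POST -d 'userName=<admin>&password=<password>' http://<zeppelin-host>:9995/api/login
# delete the note by its id
curl -s -b /tmp/zcookies -X DELETE http://<zeppelin-host>:9995/api/notebook/<notebookId>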
12-09-2020
12:05 AM
@youngick As per Cloudera, this is an open bug in Apache that is not fixed in HDP 3.1.5 either, so as a workaround we need to restart the JDBC interpreter using a cron job. Thanks, ASIF.
12-08-2020
11:24 PM
Team, Many users are facing intermittent issues using the JDBC interpreter. Issue: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient. The workaround is to restart the JDBC interpreter, so to automate the process we need to put this in a shell script. Can someone help us with this? Thanks, ASIF.
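For anyone automating the same workaround, a sketch of a cron-able script, assuming Shiro auth, HDP's default port 9995, and that jq is installed (host and credentials are placeholders):
#!/bin/bash
# restart_jdbc_interpreter.sh - restart Zeppelin's jdbc interpreter via REST
ZEPPELIN="http://<zeppelin-host>:9995"
COOKIES=$(mktemp)
# log in as an admin user and keep the session cookie
curl -s -c "$COOKIES" -X POST -d "userName=<admin>&password=<password>" "$ZEPPELIN/api/login" > /dev/null
# look up the jdbc interpreter setting id by name
ID=$(curl -s -b "$COOKIES" "$ZEPPELIN/api/interpreter/setting" | jq -r '.body[] | select(.name=="jdbc") | .id')
# restart it
curl -s -b "$COOKIES" -X PUT "$ZEPPELIN/api/interpreter/setting/restart/$ID"
rm -f "$COOKIES"
A crontab entry can then run it on a schedule, e.g. every six hours: 0 */6 * * * /path/to/restart_jdbc_interpreter.sh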
10-27-2020
08:08 PM
Hi @slambe, Thank you for the below answer. We tried the describe 'atlas_janus' command, but it didn't work out for the user. He needs to create a view over the atlas_janus table and access it via Phoenix, but he can't find all the required schema using the describe command. Can you help us out with the create table statement for the atlas_janus table? Thanks. ASIF.
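In case it helps, Phoenix can map an existing HBase table with CREATE VIEW; the column family and qualifier names must come from the actual describe 'atlas_janus' output, so the ones below are purely placeholders, and VARBINARY is used because atlas_janus holds binary JanusGraph data:
# open a Phoenix session (ZooKeeper quorum and znode are placeholders)
/usr/hdp/current/phoenix-client/bin/sqlline.py <zk-host>:2181:/hbase-secure
-- inside sqlline; "e" and "<qualifier>" stand in for real family/qualifier names
CREATE VIEW "atlas_janus" (
    "pk" VARBINARY PRIMARY KEY,
    "e"."<qualifier>" VARBINARY
);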
10-27-2020
02:04 AM
Team, We have Spark jobs running in cluster mode and using HiveServer2, wherein data is ingested from a tool and then loaded into Hive. The below error is observed intermittently while connecting to Hive. We checked that HiveServer2 is working normally with no issues. Let us know if any parameter should be added to the spark-submit command to resolve this.
*******************************************************
spark-submit command
/usr/hdp/current/spark2-client/bin/spark-submit --master yarn --queue udif --driver-memory ${driver_memory} --num-executors ${num_executors} --executor-memory ${executor_memory} --executor-cores 4 --conf spark.port.maxRetries=50 --conf spark.network.timeout=600s --conf spark.executor.heartbeatInterval=200s --class custom-class --conf spark.security.credentials.hiveserver2.enabled=true --conf spark.sql.hive.hiveserver2.jdbc.url="hive-jdbc string" --jars custom-jars
***********************************************
Error
***********************************************
Caused by: java.sql.SQLException: Could not open client transport for any of the Server URI's in ZooKeeper: Could not establish connection to jdbc:hive2://hive-host:10001/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;auth=delegationToken: HTTP Response code: 401
at shadehive.org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:344)
at shadehive.org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
at org.apache.commons.dbcp2.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:53)
at org.apache.commons.dbcp2.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:291)
at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:883)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:436)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:365)
at org.apache.commons.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:134)
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1563)
at com.hortonworks.spark.sql.hive.llap.JDBCWrapper.getConnector(HS2JDBCWrapper.scala:424)
at com.hortonworks.spark.sql.hive.llap.JDBCWrapper.getConnector(HS2JDBCWrapper.scala:453)
at com.hortonworks.spark.sql.hive.llap.DefaultJDBCWrapper.getConnector(HS2JDBCWrapper.scala)
at com.hortonworks.spark.sql.hive.llap.HiveWarehouseSessionImpl.lambda$new$0(HiveWarehouseSessionImpl.java:85)
at com.hortonworks.spark.sql.hive.llap.HiveWarehouseSessionImpl.executeUpdate(HiveWarehouseSessionImpl.java:205)
***********************************************************
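If anyone else hits this 401: since the URL uses auth=delegationToken, one thing worth checking is whether the job ships a principal and keytab so YARN can obtain and renew the HiveServer2 delegation token. A hedged sketch (the principal, keytab path, and URL principal are placeholders, not values from the post above):
/usr/hdp/current/spark2-client/bin/spark-submit \
  --master yarn --deploy-mode cluster \
  --principal <user>@EXAMPLE.COM \
  --keytab /etc/security/keytabs/<user>.keytab \
  --conf spark.security.credentials.hiveserver2.enabled=true \
  --conf spark.sql.hive.hiveserver2.jdbc.url="<hive-jdbc-string>" \
  --conf spark.sql.hive.hiveserver2.jdbc.url.principal=hive/_HOST@EXAMPLE.COM \
  --class custom-class --jars custom-jars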
- Tags:
- HDP
- hiveserver2
- Spark
10-27-2020
01:45 AM
Team, Could you provide the schema details / create table statement for atlas_janus (an HBase table)? In one use case we need to access this table via Phoenix, for which the schema information is required. Thanks. ASIF.
03-18-2019
01:56 PM
Hi Team, Do we have a way to enable auto kinit for all Active Directory users when they log in to the edge node? The sssd service is enabled and integrated with AD. We need to know if we can add any parameter to enable auto kinit for the users.
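For anyone with the same question: with the sssd AD provider, PAM authentication through pam_sss acquires a Kerberos TGT at login, so no explicit kinit is needed (this only applies to password logins; key-based SSH bypasses PAM authentication and gets no ticket). A minimal sssd.conf sketch, assuming the host is already joined to the domain (the domain name is a placeholder):
# /etc/sssd/sssd.conf
[domain/EXAMPLE.COM]
id_provider = ad
auth_provider = ad                           # password auth via sssd fetches a TGT at login
access_provider = ad
krb5_store_password_if_offline = True
krb5_ccname_template = FILE:/tmp/krb5cc_%U   # optional: where the ticket cache lands
# then restart the service
systemctl restart sssd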
02-08-2019
01:44 PM
@Raghavendra Rao
02-08-2019
01:39 PM
Hi Team, Whenever an AD user tries to access Files View, the NameNode goes down. We investigated and found the issue is with sssd: we need to add a user search filter scoped to the OU and not to the main domain. But the OU contains only groups, and users are merely members of those groups. Can anyone help us with an LDAP filter to pick up users via the groups in the OU rather than searching the whole domain? Thanks,
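For what it's worth, sssd accepts an extended search-base syntax of base?scope?filter, which lets users be selected by group membership instead of by location in the tree. A sketch with placeholder DNs (substitute the real group and domain DNs):
# /etc/sssd/sssd.conf, in the [domain/...] section
ldap_user_search_base = DC=corp,DC=example,DC=com?subtree?(memberOf=CN=hadoop-users,OU=HadoopOU,DC=corp,DC=example,DC=com)
# clear the cache and restart for the change to take effect
sss_cache -E && systemctl restart sssd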
11-14-2018
03:32 PM
2018-11-14T20:36:33,075 INFO [main] org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool - Creating metastore client for PreUpgradeTool
2018-11-14T20:36:33,106 INFO [main] hive.metastore - Trying to connect to metastore with URI thrift://sjdcdlake02.np1.ril.com:9083
2018-11-14T20:36:33,328 INFO [main] hive.metastore - Opened a connection to metastore, current connections: 1
2018-11-14T20:36:33,329 INFO [main] hive.metastore - Connected to metastore.
2018-11-14T20:36:34,533 INFO [main] hive.metastore - Trying to connect to metastore with URI thrift://sjdcdlake02.np1.ril.com:9083
2018-11-14T20:36:34,548 INFO [main] hive.metastore - Opened a connection to metastore, current connections: 2
2018-11-14T20:36:34,549 INFO [main] hive.metastore - Connected to metastore.
2018-11-14T20:36:36,572 ERROR [main] org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool - PreUpgradeTool failed
org.apache.hadoop.hive.metastore.api.MetaException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:2015)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1404)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4137)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1137)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:866)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_result$get_table_resultStandardScheme.read(ThriftHiveMetastore.java:53086) ~[hive-metastore-2.1.0.2.6.4.0-91.jar:2.1.0.2.6.4.0-91]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_result$get_table_resultStandardScheme.read(ThriftHiveMetastore.java:53063) ~[hive-metastore-2.1.0.2.6.4.0-91.jar:2.1.0.2.6.4.0-91]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_result.read(ThriftHiveMetastore.java:52994) ~[hive-metastore-2.1.0.2.6.4.0-91.jar:2.1.0.2.6.4.0-91]
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86) ~[libthrift-0.9.3.jar:0.9.3]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_table(ThriftHiveMetastore.java:1507) ~[hive-metastore-2.1.0.2.6.4.0-91.jar:2.1.0.2.6.4.0-91]
11-05-2018
02:15 PM
@Ronak bansal
11-05-2018
02:14 PM
The Atlas Web UI is working fine and I am able to log in to Atlas on port 21000, but I get the below alert and am unable to understand how to get rid of it. Kindly help me with the below alert:
Connection failed to http://localhost:21000/api/atlas/admin/status (Execution of 'curl --location-trusted -k --negotiate -u : -b /var/lib/ambari-agent/tmp/cookies/629e7881-187d-4548-8fa3-7425d62dce9e -c /var/lib/ambari-agent/tmp/cookies/629e7881-187d-4548-8fa3-7425d62dce9e -w '%{http_code}' http://localhost:21000/api/atlas/admin/status --connect-timeout 5 --max-time 7 -o /dev/null 1>/tmp/tmpNteFoC 2>/tmp/tmpeS6CDs' returned 28.
[curl progress meter omitted - no bytes were transferred]
curl: (28) Resolving timed out after 5517 milliseconds
000)
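If anyone hits the same alert: curl error 28 with "Resolving timed out" for localhost points at name resolution, not Atlas itself. A couple of hedged checks:
# localhost should resolve from /etc/hosts, not DNS
grep -w localhost /etc/hosts          # expect a line like: 127.0.0.1 localhost
grep '^hosts:' /etc/nsswitch.conf     # expect "files" before "dns", e.g. hosts: files dns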
11-05-2018
08:49 AM
Please find the below logs from the NameNode reboot.
stderr: File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/jmx.py", line 42, in get_value_from_jmx
return data_dict["beans"][0][property]
IndexError: list index out of range
2018-11-05 13:27:52,753 - Getting jmx metrics from NN failed. URL: http://sjdcdlake02.np1.ril.com:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/jmx.py", line 42, in get_value_from_jmx
return data_dict["beans"][0][property]
IndexError: list index out of range
Python script has been killed due to timeout after waiting 1800 secs
stdout: 2018-11-05 13:27:55,682 - call returned (255, '18/11/05 13:27:55 INFO ipc.Client: Retrying connect to server: sjdcdlake02.np1.ril.com/10.21.51.76:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)\nOperation failed: Call From sjdcdlake02.np1.ril.com/10.21.51.76 to sjdcdlake02.np1.ril.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused')
2018-11-05 13:27:55,683 - NameNode HA states: active_namenodes = [(u'nn1', 'sjdcdlake01.np1.ril.com:50070')], standby_namenodes = [], unknown_namenodes = [(u'nn2', 'sjdcdlake02.np1.ril.com:50070')]
2018-11-05 13:27:55,684 - Will retry 8 time(s), caught exception: The NameNode nn2 is not listed as Active or Standby, waiting.... Sleeping for 25 sec(s)
11-01-2018
03:15 PM
Using Hive configuration directory [/usr/hdp/2.6.2.0-205/hive2/conf]
/usr/hdp/2.6.4.0-91/hadoop/conf:/usr/hdp/2.6.4.0-91/hadoop/lib/*:/usr/hdp/2.6.4.0-91/hadoop/.//*:/usr/hdp/2.6.4.0-91/hadoop-hdfs/./:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/*:/usr/hdp/2.6.4.0-91/hadoop-hdfs/.//*:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/*:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//*:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/*:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//*:/usr/hdp/2.6.4.0-91/tez/*:/usr/hdp/2.6.4.0-91/tez/lib/*:/usr/hdp/2.6.4.0-91/tez/conf
Log file for import is /usr/hdp/current/atlas-server/logs/import-hive.log
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.0-205/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.4.0-91/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2018-11-01T14:42:55,450 INFO [main] org.apache.atlas.ApplicationProperties - Looking for atlas-application.properties in classpath
2018-11-01T14:42:55,452 INFO [main] org.apache.atlas.ApplicationProperties - Looking for /atlas-application.properties in classpath
2018-11-01T14:42:55,453 INFO [main] org.apache.atlas.ApplicationProperties - Loading atlas-application.properties from null
Exception in thread "main" org.apache.atlas.hook.AtlasHookException: HiveMetaStoreBridge.main() failed.
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:650)
Caused by: org.apache.atlas.AtlasException: Failed to load application properties
at org.apache.atlas.ApplicationProperties.get(ApplicationProperties.java:97)
at org.apache.atlas.ApplicationProperties.get(ApplicationProperties.java:64)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:622)
Caused by: org.apache.commons.configuration.ConfigurationException: Cannot locate configuration source null
at org.apache.commons.configuration.AbstractFileConfiguration.load(AbstractFileConfiguration.java:217)
at org.apache.commons.configuration.AbstractFileConfiguration.load(AbstractFileConfiguration.java:197)
at org.apache.commons.configuration.AbstractFileConfiguration.<init>(AbstractFileConfiguration.java:181)
at org.apache.commons.configuration.PropertiesConfiguration.<init>(PropertiesConfiguration.java:269)
at org.apache.atlas.ApplicationProperties.<init>(ApplicationProperties.java:47)
at org.apache.atlas.ApplicationProperties.get(ApplicationProperties.java:93)
... 2 more
Failed to import Hive Data Model!!!
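If anyone else lands here: the "Loading atlas-application.properties from null" line means the import tool could not find the Atlas properties file on its classpath. A hedged sketch of the usual workaround on HDP (paths assume the stock layout; verify before use). Note also that the log shows the Hive conf dir coming from 2.6.2.0-205 while Hadoop comes from 2.6.4.0-91, which may be worth reconciling:
# make atlas-application.properties visible to the import tool's classpath
cp /etc/atlas/conf/atlas-application.properties /etc/hive/conf/
# or add the Atlas conf dir to the classpath explicitly and rerun
export HADOOP_CLASSPATH=/etc/atlas/conf:$HADOOP_CLASSPATH
/usr/hdp/current/atlas-server/hook-bin/import-hive.sh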
10-10-2018
09:45 AM
zeppeline-error.png Hi Team, I am new to Zeppelin, and we are getting a permission denied error while trying to run R code that reads a CSV file from Zeppelin. Please check the attached image. We have changed the file permissions to 777 and also changed the ownership to the user, but we are still getting the same permission denied error. Kindly guide us with the same.
09-26-2018
10:02 AM
We have deployed an HDP 2.6.4 production cluster successfully and kerberized it with a local AD. Now we are asked to enable delegation between the local AD and the corporate AD, so users can be synced automatically into the local AD. Any help here will be appreciated.
09-24-2018
12:40 PM
@Jay Kumar SenSharma We have tried to install HDP 2.6.4 (current version is HDP 2.6.2), but it did not install correctly for some reason. Now we need to remove it completely and then reinstall.
09-24-2018
11:26 AM
upgrade264.png Hi, I have tried upgrading the HDP 2.6.2 cluster to 2.6.4 using the VDF file, but now I am unable to delete the 2.6.4 version from my list. 2.6.4 has been installed on some of my hosts, but when I go to the Ambari versions page it shows 2.6.2 as my current version. Regards, Asif
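In case it helps someone later, a stuck repository version can often be removed through the Ambari REST API once no host still reports it as INSTALLED or CURRENT. A sketch with placeholder host and credentials:
# list registered repository versions and their ids
curl -u admin -H 'X-Requested-By: ambari' 'http://<ambari-host>:8080/api/v1/stacks/HDP/versions/2.6/repository_versions?fields=RepositoryVersions/repository_version'
# delete the unwanted one by id
curl -u admin -H 'X-Requested-By: ambari' -X DELETE 'http://<ambari-host>:8080/api/v1/stacks/HDP/versions/2.6/repository_versions/<id>'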
09-17-2018
01:38 PM
Hi, I have successfully enabled SSL and LDAP with NiFi in the HDP cluster (by integrating HDF components into HDP using the mpack). The challenge I am facing: authentication succeeds for nifiadmin and all users in LDAP, but once logged in we receive the error "Unable to view the user interface. Contact the system administrator.". Ranger has been enabled for authorization and I have created policies giving complete access to the nifiadmin user, but the issue remains.
08-24-2018
01:27 PM
@Akhil S Naik I tried to analyse it but am unable to understand which file exists, because I am deleting the mpacks, HDF, and all of HDF's new services from common-services under /var/lib/ambari-server/resources/common-services.
08-24-2018
10:51 AM
Tried again with the HDF mpack 3.0 version, but still getting the same "File exists" error. I have deleted everything as below:
rm -rf /var/lib/ambari-server/resources/common-services/NIFI
rm -rf /var/lib/ambari-server/resources/common-services/REGISTRY
rm -rf /var/lib/ambari-server/resources/common-services/STREAMLINE
rm -rf /var/lib/ambari-server/resources/mpacks /var/lib/ambari-server/resources/stacks/HDF
Then ran the command to install the mpack for HDF. Error as below:
[root@sidchadoop01 tmp]# ambari-server install-mpack \
> --mpack=/tmp/hdf-ambari-mpack-3.0.1.0-43.tar.gz \
> --verbose
Using python /usr/bin/python
Installing management pack
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Installing management pack /tmp/hdf-ambari-mpack-3.0.1.0-43.tar.gz
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Download management pack to temp location /var/lib/ambari-server/data/tmp/hdf-ambari-mpack-3.0.1.0-43.tar.gz
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Expand management pack at temp location /var/lib/ambari-server/data/tmp/hdf-ambari-mpack-3.0.1.0-43/
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Stage management pack hdf-ambari-mpack-3.0.1.0-43 to staging location /var/lib/ambari-server/resources/mpacks/hdf-ambari-mpack-3.0.1.0-43
INFO: Processing artifact hdf-service-definitions of type service-definitions in /var/lib/ambari-server/resources/mpacks/hdf-ambari-mpack-3.0.1.0-43/common-services
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Symlink: /var/lib/ambari-server/resources/common-services/NIFI/1.0.0
INFO: Symlink: /var/lib/ambari-server/resources/common-services/NIFI/1.1.0
INFO: Symlink: /var/lib/ambari-server/resources/common-services/NIFI/1.2.0
Traceback (most recent call last):
File "/usr/sbin/ambari-server.py", line 952, in <module>
mainBody()
File "/usr/sbin/ambari-server.py", line 922, in mainBody
main(options, args, parser)
File "/usr/sbin/ambari-server.py", line 874, in main
action_obj.execute()
File "/usr/sbin/ambari-server.py", line 78, in execute
self.fn(*self.args, **self.kwargs)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 896, in install_mpack
(mpack_metadata, mpack_name, mpack_version, mpack_staging_dir, mpack_archive_path) = _install_mpack(options, replay_mode)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 792, in _install_mpack
process_service_definitions_artifact(artifact, artifact_source_dir, options)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 515, in process_service_definitions_artifact
create_symlink(src_service_definitions_dir, dest_service_definitions_dir, file, options.force)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 235, in create_symlink
create_symlink_using_path(src_path, dest_link, force)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 247, in create_symlink_using_path
sudo.symlink(src_path, dest_link)
File "/usr/lib/python2.6/site-packages/resource_management/core/sudo.py", line 123, in symlink
os.symlink(source, link_name)
OSError: [Errno 17] File exists
08-21-2018
09:44 AM
INFO: Processing artifact hdp-addon-services of type stack-addon-service-definitions in /var/lib/ambari-server/resources/mpacks/hdf-ambari-mpack-3.1.1.0-35/hdp-addon-services/HDF/3.0
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
Traceback (most recent call last):
File "/usr/sbin/ambari-server.py", line 952, in <module>
mainBody()
File "/usr/sbin/ambari-server.py", line 922, in mainBody
main(options, args, parser)
File "/usr/sbin/ambari-server.py", line 874, in main
action_obj.execute()
File "/usr/sbin/ambari-server.py", line 78, in execute
self.fn(*self.args, **self.kwargs)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 896, in install_mpack
(mpack_metadata, mpack_name, mpack_version, mpack_staging_dir, mpack_archive_path) = _install_mpack(options, replay_mode)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 794, in _install_mpack
process_stack_addon_service_definitions_artifact(artifact, artifact_source_dir, options)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 554, in process_stack_addon_service_definitions_artifact
sudo.symlink(source_service_version_path, dest_link)
File "/usr/lib/python2.6/site-packages/resource_management/core/sudo.py", line 123, in symlink
os.symlink(source, link_name)
OSError: [Errno 17] File exists

I also tried one of the suggested solutions as below, but it didn't work.
Cause: This issue occurs when symlinks for NiFi and other HDF-related services are already present in the resource directories.
Solution: To resolve this issue, do the following:
Remove (preferably after taking a backup) the directories created while installing the HDF mpack, using the following commands:
rm -rf /var/lib/ambari-server/resources/common-services/NIFI
rm -rf /var/lib/ambari-server/resources/common-services/REGISTRY
rm -rf /var/lib/ambari-server/resources/common-services/STREAMLINE
rm -rf /var/lib/ambari-server/resources/mpacks /var/lib/ambari-server/resources/stacks/HDF
Run the command to install the mpack for HDF.
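One extra check that sometimes explains a persistent "File exists": stale symlinks elsewhere under the resources tree survive the rm -rf above. A hedged way to spot them:
# list broken symlinks anywhere under the Ambari resources tree
find /var/lib/ambari-server/resources -xtype l
# and any symlinks still present under common-services
find /var/lib/ambari-server/resources/common-services -maxdepth 2 -type l -ls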
08-08-2018
08:35 AM
Need to know about the ReplAdmin, Service Admin, and Lock access levels in Ranger for Hive. Can someone give me details on these, and also provide some examples for the same?
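For context on what each level typically gates in Hive, a hedged illustration (the JDBC URL and object names are placeholders): Lock covers explicit LOCK/UNLOCK statements, ReplAdmin covers the REPL family of replication commands, and Service Admin covers administrative calls such as KILL QUERY.
# Lock: explicit table locks
beeline -u "<hive-jdbc-url>" -e "LOCK TABLE db1.t1 EXCLUSIVE;"
# ReplAdmin: replication commands
beeline -u "<hive-jdbc-url>" -e "REPL DUMP db1;"
# Service Admin: administrative operations
beeline -u "<hive-jdbc-url>" -e "KILL QUERY '<query-id>';"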
07-16-2018
06:11 AM
The Sqoop job fails when run via Oozie, though it works well from the command line. Error as below:
2018-07-16 11:19:07,280 WARN ShellActionExecutor:523 - SERVER[Server_hostname] USER[asif] GROUP[-] TOKEN[] APP[auto_ingest] JOB[0000007-180713182341268-oozie-oozi-W] ACTION[0000007-180713182341268-oozie-oozi-W@shell_2] Launcher ERROR, reason: Main class [org.apache.oozie.action.hadoop.ShellMain], exit code [1]