Member since: 02-27-2017
Posts: 9
Kudos Received: 1
Solutions: 0
07-04-2019
10:17 AM
1 Kudo
I was facing the same issue after installing a fresh cluster. The Metastore server wouldn't start and hung while executing "yarn rmadmin -refreshSuperUserGroupsConfiguration". The cause was that the master node on which the Hive Metastore was installed was not the same node where the YARN ResourceManager was located. Therefore the YARN client and yarn-site.xml were not automatically available to successfully connect to the ResourceManager on the other host. Simply adding the YARN Client to the host where the Metastore Server was installed resolved the problem. The reason why adding a second ResourceManager resolved the issue in @Guozhen Li's case was probably that the second ResourceManager was installed on the same node where the Hive Metastore was located, so the addition of the YARN client and the yarn-site.xml file remedied the problem.
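For anyone checking their own setup: the point is simply that the Metastore host needs a yarn-site.xml pointing at the actual ResourceManager. A rough sketch of the relevant property (the host name and port below are placeholders, not values from my cluster):
<!-- yarn-site.xml on the Hive Metastore host; replace host/port with your ResourceManager's -->
<property>
  <name>yarn.resourcemanager.address</name>
  <value>rm-host.example.com:8050</value>
</property>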
05-10-2019
09:15 AM
For deployment of HDF on a fresh HDP production cluster (total size < 100 nodes), will the HDF services such as Kafka, Storm and NiFi use the HDP ZooKeeper cluster, or must a separate dedicated ZooKeeper quorum be installed for the HDF services? If the same ZooKeeper quorum is used, are 3 ZK nodes enough, or should I consider 5 ZK nodes for a 60-node (combined) cluster?
09-13-2017
02:24 PM
Hi @Andrew Lim, I tried both, but the new version is still not appearing.
09-13-2017
09:21 AM
I'm a bit confused about how to upgrade my HDF stack from 3.0.1.0 to the recent patch release 3.0.1.1. Ambari itself is already at the latest version (2.5.1), and I've already upgraded the Management Pack on the Ambari server node to the latest version (3.0.1.1). But I don't see an entry for HDF-3.0.1.1 in the dropdown list on the Admin -> Versions page. How should I register the new version to be able to do this upgrade? Do I need to upload a "Version Definition File"?
07-03-2017
01:38 PM
I've run into a bug when upgrading Ambari from 2.4 to 2.5. I followed all the recommended steps until I got to upgrading the Ambari Server database schema using the "ambari-server upgrade" command, which failed with the following error due to a known issue when the Lucidworks HDPSearch component was previously added to Ambari: $> ambari-server upgrade -v
Using python /usr/bin/python
Upgrading ambari-server
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Upgrade Ambari Server
INFO: Updating Ambari Server properties in ambari.properties ...
WARNING: Can not find ambari.properties.rpmsave file from previous version, skipping import of settings
INFO: Updating Ambari Server properties in ambari-env.sh ...
INFO: Can not find ambari-env.sh.rpmsave file from previous version, skipping restore of environment settings. ambari-env.sh may not include any user customization.
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: ===========================================================================================
INFO: Executing Mpack Replay Log :
INFO: {'purge': False, 'mpack_command': 'install-mpack', 'mpack_path': '/var/lib/ambari-server/resources/mpacks/cache/solr-service-mpack-5.5.2.2.5.tar.gz', 'force': False, 'verbose': False}
INFO: ===========================================================================================
INFO: Installing management pack /var/lib/ambari-server/resources/mpacks/cache/solr-service-mpack-5.5.2.2.5.tar.gz
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Download management pack to temp location /var/lib/ambari-server/data/tmp/solr-service-mpack-5.5.2.2.5.tar.gz
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Expand management pack at temp location /var/lib/ambari-server/data/tmp/solr-service
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Stage management pack solr-ambari-mpack-5.5.2.2.5 to staging location /var/lib/ambari-server/resources/mpacks/solr-ambari-mpack-5.5.2.2.5
INFO: Force removing previously installed management pack from /var/lib/ambari-server/resources/mpacks/solr-ambari-mpack-5.5.2.2.5
INFO: Processing artifact SOLR-common-services of type service-definitions in /var/lib/ambari-server/resources/mpacks/solr-ambari-mpack-5.5.2.2.5/common-services
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
Traceback (most recent call last):
File "/usr/sbin/ambari-server.py", line 941, in <module>
mainBody()
File "/usr/sbin/ambari-server.py", line 911, in mainBody
main(options, args, parser)
File "/usr/sbin/ambari-server.py", line 863, in main
action_obj.execute()
File "/usr/sbin/ambari-server.py", line 78, in execute
self.fn(*self.args, **self.kwargs)
File "/usr/lib/python2.6/site-packages/ambari_server/serverUpgrade.py", line 363, in upgrade
replay_mpack_logs()
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 983, in replay_mpack_logs
install_mpack(replay_options, replay_mode=True)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 896, in install_mpack
(mpack_metadata, mpack_name, mpack_version, mpack_staging_dir, mpack_archive_path) = _install_mpack(options, replay_mode)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 792, in _install_mpack
process_service_definitions_artifact(artifact, artifact_source_dir, options)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 515, in process_service_definitions_artifact
create_symlink(src_service_definitions_dir, dest_service_definitions_dir, file, options.force)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 235, in create_symlink
create_symlink_using_path(src_path, dest_link, force)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 247, in create_symlink_using_path
sudo.symlink(src_path, dest_link)
File "/usr/lib/python2.6/site-packages/resource_management/core/sudo.py", line 123, in symlink
os.symlink(source, link_name)
OSError: [Errno 17] File exists
This issue is logged, but the fix is only available in the next release: https://issues.apache.org/jira/browse/AMBARI-21263 Trying to rectify this, I uninstalled Lucidworks HDPSearch from the cluster, but that didn't help either. What should I do now? How can I either resolve this upgrade issue or at least roll back to 2.4 without messing things up?
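For context, the failing step is just os.symlink hitting a path that already exists, so what I'm considering trying (untested; the exact path is only a guess based on the log above) is removing the stale SOLR service-definition symlink and re-running the upgrade:
$> ls -l /var/lib/ambari-server/resources/common-services/SOLR
# if the version entry listed there is a stale symlink left by the old mpack,
# removing just that symlink should let the mpack replay recreate it
$> rm /var/lib/ambari-server/resources/common-services/SOLR/<stale-version-symlink>
$> ambari-server upgrade -v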
03-22-2017
10:03 AM
No, I'm only using the Primary node for those types of processors.
03-22-2017
09:07 AM
I tried that, but it doesn't process the flowfile. It also gives the following warning: ConvertJSONToAvro[id=8aec4759-eae0-1dab-ffff-ffff9b831c59] Failed to convert 1/1 records from JSON to Avro
03-17-2017
08:51 AM
I have a JSON file containing an array of records and I'm trying to convert the entire JSON to an Avro file using the ConvertJSONToAvro processor. JSON sample: [
{
"id": 123,
"title": "foo"
},
{
"id": 345,
"title": "bar"
}
]
Avro Schema: {
"name": "test",
"type": "array",
"items": {
"type": "record",
"name": "user",
"fields": [
{
"name": "id",
"type": "int"
},
{
"name": "title",
"type": "string"
}
]
}
}
However, using the above Avro schema the processor throws an exception: java.lang.IllegalArgumentException: Schemas for JSON files should be record
at org.kitesdk.shaded.com.google.common.base.Preconditions.checkArgument(Preconditions.java:88) ~[kite-data-core-1.0.0.jar:na]
at org.kitesdk.data.spi.filesystem.JSONFileReader.initialize(JSONFileReader.java:84) ~[kite-data-core-1.0.0.jar:na]
at org.apache.nifi.processors.kite.ConvertJSONToAvro$1.process(ConvertJSONToAvro.java:144) ~[nifi-kite-processors-1.1.0.2.1.1.0-2.jar:1.1.0.2.1.1.0-2]
at org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2578) ~[nifi-framework-core-1.1.0.2.1.1.0-2.jar:1.1.0.2.1.1.0-2]
at org.apache.nifi.processors.kite.ConvertJSONToAvro.onTrigger(ConvertJSONToAvro.java:139) ~[nifi-kite-processors-1.1.0.2.1.1.0-2.jar:1.1.0.2.1.1.0-2]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) ~[nifi-api-1.1.0.2.1.1.0-2.jar:1.1.0.2.1.1.0-2]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) [nifi-framework-core-1.1.0.2.1.1.0-2.jar:1.1.0.2.1.1.0-2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.0.2.1.1.0-2.jar:1.1.0.2.1.1.0-2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.0.2.1.1.0-2.jar:1.1.0.2.1.1.0-2]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.0.2.1.1.0-2.jar:1.1.0.2.1.1.0-2]
Does this mean the ConvertJSONToAvro processor cannot convert an array of records, and that I have to split the JSON file before feeding the records to this processor? It seems like it doesn't recognise "type": "array" at the root of the schema.
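For reference, if splitting is indeed required (e.g. with a SplitJson processor ahead of ConvertJSONToAvro), my understanding is that the schema handed to the processor would then be just the inner record from above, without the array wrapper, roughly:
{
  "name": "user",
  "type": "record",
  "fields": [
    { "name": "id", "type": "int" },
    { "name": "title", "type": "string" }
  ]
}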
02-27-2017
09:15 AM
I'm using QueryDatabaseTable on a 3-node HDF/NiFi cluster. What's happening is that once the processor starts, it simultaneously fetches three flowfiles containing three identical copies of the records, so each record is fetched in triplicate. I suspect that each node fetches its own copy of the records at the same time, without any coordination between the nodes. To test whether this is the case, I changed the processor's configuration on the SCHEDULING tab, switching the Execution value from "All nodes" to "Primary node". After applying this change the issue was resolved and only one copy of each record is fetched. Is this a bug in NiFi or is this normal behaviour? What if I need all nodes to participate in fetching records from the database so as not to overload the primary node? Thanks