Member since: 10-14-2016
Posts: 32
Kudos Received: 1
Solutions: 0
11-23-2018
02:16 PM
I had limited resources on the YARN queue that I used for my Spark job.
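For context: an application stuck in ACCEPTED usually means the queue cannot allocate the ApplicationMaster container. A minimal capacity-scheduler.xml sketch of the settings worth checking (the queue name "spark" is an assumption for illustration):

```xml
<!-- Hypothetical excerpt from capacity-scheduler.xml; the queue name "spark" is made up. -->
<property>
  <name>yarn.scheduler.capacity.root.spark.capacity</name>
  <value>30</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.spark.maximum-capacity</name>
  <value>50</value>
</property>
<property>
  <!-- Fraction of cluster resources that ApplicationMasters may use;
       if this is too small, new AMs stay stuck in ACCEPTED. -->
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.2</value>
</property>
```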
11-22-2018
01:59 PM
Hello All, I am trying to run a Spark 2 job with Oozie on HDP 2.6.5. Oozie starts successfully, but my Spark 2 job stays in the RUNNING state. On the Resource Manager UI I see:
Application Type: SPARK
Application Tags: oozie-7bf4f1cfb7c848de2db37230975b4450
Application Priority: 0 (Higher Integer value indicates higher priority)
YarnApplicationState: ACCEPTED: waiting for AM container to be allocated, launched and register with RM.
Kindly suggest.
11-12-2018
01:18 PM
Hello Ivan, I have the same problem with HDP 2.6.4. Has it been solved for you? Regards, Chris
03-29-2018
06:40 AM
Are there ways to solve this problem? Is it possible to compile the PutHDFS code against a newer version of the Hadoop code?
12-20-2017
11:09 AM
I also get these errors, and not only with hive-webhcat but also with other components.
12-19-2017
06:19 AM
@Geoffrey Shelton Okot Ambari 2.6 is installed. yum repolist gives:

repo id              repo name                        status
!HDF-3.0             HDF-3.0                              45
!HDP-2.6             HDP-2.6                             232
!HDP-2.6.0.3         HDP-2.6.0.3                         232
!HDP-SOLR-2.6-100    HDP-SOLR-2.6-100                      1
!HDP-UTILS-1.1.0.21  HDP-UTILS-1.1.0.21                   64
!HDP-UTILS-2.6.0.3   HDP-UTILS-2.6.0.3                    64
!ambari-2.6.0.0      ambari Version - ambari-2.6.0.0      12
.....
12-15-2017
11:12 AM
Hello All, I am trying to upgrade the Ambari server from 2.5 to 2.6. When I execute the command "ambari-server upgrade" I get the following error:
org.apache.ambari.server.AmbariException: Unable to find any CURRENT repositories.
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:203)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
Caused by: org.apache.ambari.server.AmbariException: Unable to find any CURRENT repositories.
at org.apache.ambari.server.upgrade.UpgradeCatalog260.getCurrentVersionID(UpgradeCatalog260.java:510)
at org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:194)
at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
... 1 more
Any ideas how I can fix this error?
Labels: Apache Ambari
12-11-2017
06:33 AM
Now I get this error:
11 Dec 2017 07:26:06,379 ERROR [main] SchemaUpgradeHelper:437 - Exception occurred during upgrade, failed
org.apache.ambari.server.AmbariException: Unable to find any CURRENT repositories.
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:203)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
Caused by: org.apache.ambari.server.AmbariException: Unable to find any CURRENT repositories.
at org.apache.ambari.server.upgrade.UpgradeCatalog260.getCurrentVersionID(UpgradeCatalog260.java:510)
at org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:194)
at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
... 1 more
12-08-2017
06:35 AM
Hello All, I am trying to upgrade the Ambari server from 2.5 to 2.6. When I execute the command "ambari-server upgrade" I get the following error:
Using python /usr/bin/python
Upgrading ambari-server
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Upgrade Ambari Server
INFO: Updating Ambari Server properties in ambari.properties ...
WARNING: Can not find ambari.properties.rpmsave file from previous version, skipping import of settings
INFO: Updating Ambari Server properties in ambari-env.sh ...
INFO: Can not find ambari-env.sh.rpmsave file from previous version, skipping restore of environment settings. ambari-env.sh may not include any user customization.
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: ===========================================================================================
INFO: Executing Mpack Replay Log :
INFO: {'purge': False, 'mpack_command': 'install-mpack', 'mpack_path': '/var/lib/ambari-server/resources/mpacks/cache/solr-service-mpack-5.5.2.2.5.tar.gz', 'force': False, 'verbose': False}
INFO: ===========================================================================================
INFO: Installing management pack /var/lib/ambari-server/resources/mpacks/cache/solr-service-mpack-5.5.2.2.5.tar.gz
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Download management pack to temp location /var/lib/ambari-server/data/tmp/solr-service-mpack-5.5.2.2.5.tar.gz
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Expand management pack at temp location /var/lib/ambari-server/data/tmp/solr-service
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Stage management pack solr-ambari-mpack-5.5.2.2.5 to staging location /var/lib/ambari-server/resources/mpacks/solr-ambari-mpack-5.5.2.2.5
INFO: Force removing previously installed management pack from /var/lib/ambari-server/resources/mpacks/solr-ambari-mpack-5.5.2.2.5
INFO: Processing artifact SOLR-common-services of type service-definitions in /var/lib/ambari-server/resources/mpacks/solr-ambari-mpack-5.5.2.2.5/common-services
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
Traceback (most recent call last):
File "/usr/sbin/ambari-server.py", line 950, in <module>
mainBody()
File "/usr/sbin/ambari-server.py", line 920, in mainBody
main(options, args, parser)
File "/usr/sbin/ambari-server.py", line 872, in main
action_obj.execute()
File "/usr/sbin/ambari-server.py", line 78, in execute
self.fn(*self.args, **self.kwargs)
File "/usr/lib/python2.6/site-packages/ambari_server/serverUpgrade.py", line 226, in upgrade
replay_mpack_logs()
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 983, in replay_mpack_logs
install_mpack(replay_options, replay_mode=True)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 896, in install_mpack
(mpack_metadata, mpack_name, mpack_version, mpack_staging_dir, mpack_archive_path) = _install_mpack(options, replay_mode)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 792, in _install_mpack
process_service_definitions_artifact(artifact, artifact_source_dir, options)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 515, in process_service_definitions_artifact
create_symlink(src_service_definitions_dir, dest_service_definitions_dir, file, options.force)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 235, in create_symlink
create_symlink_using_path(src_path, dest_link, force)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 247, in create_symlink_using_path
sudo.symlink(src_path, dest_link)
File "/usr/lib/python2.6/site-packages/resource_management/core/sudo.py", line 123, in symlink
os.symlink(source, link_name)
OSError: [Errno 17] File exists
Any ideas how I can fix this error?
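The traceback ends in os.symlink, so the failure simply means a link (or file) already exists at the destination under the mpack staging area; in similar reports the fix was to remove the stale symlink before re-running the upgrade. A plain-Python reproduction of the errno, using made-up temp paths rather than the real /var/lib/ambari-server ones:

```python
import errno
import os
import tempfile

# Reproduce "OSError: [Errno 17] File exists" from os.symlink.
workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "target")
link = os.path.join(workdir, "link")
os.mkdir(target)

os.symlink(target, link)           # first link succeeds
try:
    os.symlink(target, link)       # second attempt hits the existing link
except OSError as e:
    assert e.errno == errno.EEXIST  # errno 17

# Removing the stale link first lets the call succeed again.
os.remove(link)
os.symlink(target, link)
```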
Labels: Apache Ambari, Apache Solr
10-13-2017
05:15 AM
If I configure the processor for netcat, it works fine. The problem occurs when I want to use it with a Flume source of type=avro.
10-12-2017
01:19 PM
Hello All, I would like to use the ExecuteFlumeSource processor. I have a server that sends files using the Avro client of Flume, and I want to read these files into NiFi. Has somebody already tried to do this with NiFi 1.3? I get an error in NiFi:
2017-09-21 15:51:41,862 ERROR
[Timer-Driven Process Thread-10] o.a.n.p.flume.ExecuteFlumeSource
ExecuteFlumeSource[id=64d53840-bec5-1945-8c16-b4710b5f37a7]
ExecuteFlumeSource[id=64d53840-bec5-1945-8c16-b4710b5f37a7] failed to process
session due to java.lang.AbstractMethodError:
org.apache.avro.specific.SpecificFixed.getSchema()Lorg/apache/avro/Schema;: {} java.lang.AbstractMethodError:
org.apache.avro.specific.SpecificFixed.getSchema()Lorg/apache/avro/Schema;
at org.apache.avro.specific.SpecificFixed.<init>(SpecificFixed.java:36)
at org.apache.avro.ipc.MD5.<init>(MD5.java:16)
at org.apache.avro.ipc.Responder.<init>(Responder.java:73)
at org.apache.avro.ipc.generic.GenericResponder.<init>(GenericResponder.java:45)
at org.apache.avro.ipc.specific.SpecificResponder.<init>(SpecificResponder.java:55)
at org.apache.avro.ipc.specific.SpecificResponder.<init>(SpecificResponder.java:51)
at org.apache.avro.ipc.specific.SpecificResponder.<init>(SpecificResponder.java:43)
at org.apache.flume.source.AvroSource.start(AvroSource.java:230)
Regards, Chris
Labels: Apache Flume, Apache NiFi
10-12-2017
12:50 PM
I tried this, but it caused performance issues. I found out how to do this with the JoltTransform processor.
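For reference, a Jolt shift spec can iterate over a top-level array with "*", so it also works on an array of records, not just a single object. A sketch of such a spec (the field names id, user, and name are made up for illustration; treat it as a starting point, not a verified transform):

```json
[
  {
    "operation": "shift",
    "spec": {
      "*": {
        "id": "[&1].id",
        "user": {
          "name": "[&2].user_name"
        }
      }
    }
  }
]
```

The intent is that each element of the input array keeps its index ([&1]/[&2] refer back up to the matched array position) while the nested user.name field is lifted to a flat user_name key.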
10-12-2017
10:59 AM
Hello All, I want to flatten JSON records using NiFi. The records are stored in Kafka as Avro. I use the ConsumeKafkaRecord processor to convert Avro to JSON. For flattening I would like to use the JoltTransformJSON processor, but this doesn't work, since it expects a single JSON object as input and not an array of JSON records. Any ideas how I can do this? Ideally I would not have to convert the records to JSON at all, but it seems there is currently no processor to flatten Avro. Regards, Chris
Labels: Apache NiFi
10-12-2017
07:53 AM
1 Kudo
Yes, you can use one schema for reading and a different one for writing.
08-03-2017
11:20 AM
Hello All, does Streaming Analytics Manager support Spark Streaming?
Labels: Apache Spark, Cloudera DataFlow (CDF)
05-23-2017
08:32 AM
Hello All, I want to configure a Flume source processor using Avro as the Flume source. When I start the ExecuteFlumeSource processor, I don't get any errors, but I can't connect to the source port: telnet localhost 5151 gives "connection refused". The configuration is as follows:
source type = avro
agent name = a1
source name = batchSource
Flume configuration:
a1.sources.batchSource.type=avro
a1.sources.batchSource.bind=0.0.0.0
a1.sources.batchSource.port=5151
Using netcat as the Flume source works.
Labels: Apache Flume, Apache NiFi
05-10-2017
05:07 AM
Can you change outputStream.write(text) to outputStream.write(text.encode('latin-1'))?
05-09-2017
09:59 AM
Hello,
In the "process" method you can, for instance, read data from the input stream, change some things, and write the result to the output stream. See https://community.hortonworks.com/content/kbentry/75032/executescript-cookbook-part-1.html for more information on ExecuteScript.
text = IOUtils.toString(inputStream, StandardCharsets.ISO_8859_1)
...
outputStream.write(text)
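For context, the cookbook linked above wraps that process method in a StreamCallback. A sketch of the full script body follows; it only runs inside NiFi's ExecuteScript processor with the Jython engine, which provides the session and REL_SUCCESS objects:

```python
# Runs only inside NiFi ExecuteScript (Jython engine);
# "session" and "REL_SUCCESS" are injected by the processor.
from org.apache.commons.io import IOUtils
from java.nio.charset import StandardCharsets
from org.apache.nifi.processor.io import StreamCallback

class TransformCallback(StreamCallback):
    def process(self, inputStream, outputStream):
        text = IOUtils.toString(inputStream, StandardCharsets.ISO_8859_1)
        # ... change some things here ...
        outputStream.write(text.encode('latin-1'))

flowFile = session.get()
if flowFile is not None:
    flowFile = session.write(flowFile, TransformCallback())
    session.transfer(flowFile, REL_SUCCESS)
```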
04-19-2017
06:00 AM
Only with SparkR.
04-18-2017
11:30 AM
Hello, I am trying to run SparkR in Zeppelin 0.7.0 on HDP 2.6. If I start SparkR I get an error without any message saying which error. In the Zeppelin log file I see the following:
WARN [2017-04-18 10:54:24,981] ({pool-2-thread-2} NotebookServer.java[afterStatusChange]:2055) - Job 20150424-154226_261270952 is finished, status: ERROR, exception: null, result:
I set SPARK_HOME in zeppelin-env.sh to /usr/hdp/current/spark-client and /usr/hdp/current/spark2-client, respectively, without success. R is installed on all the worker nodes. Any ideas?
02-03-2017
10:06 AM
Hi, I am trying to use the ExecuteScript processor with Python to convert strings with special characters like é. For the encoding I use latin-1. The script is:
text = IOUtils.toString(inputStream, StandardCharsets.ISO_8859_1)
....
outputStream.write(bytearray(out.encode('latin-1')))
Using this I get the following error:
org.apache.nifi.processor.exception.ProcessException: javax.script.ScriptException: TypeError: write(): 1st arg can't be coerced to int, byte[] in <script> at line number 33
If I loop over the bytearray:
text = IOUtils.toString(inputStream, StandardCharsets.ISO_8859_1)
....
out = bytearray(out.encode('latin-1'))
for o in out:
    outputStream.write(o)
I don't get this error. Thanks for your help, Chris
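As a plain-Python sanity check of the encoding itself (outside NiFi): latin-1 maps é to the single byte 0xE9, which is what the byte-by-byte loop above ends up writing one value at a time.

```python
# -*- coding: utf-8 -*-
# Plain-Python check of the latin-1 round trip used in the script above.
text = u"caf\u00e9"                      # "café"
data = bytearray(text.encode('latin-1'))
assert data == bytearray(b'caf\xe9')     # é -> single byte 0xE9
assert data[-1] == 0xE9                  # iterating yields ints like this
# Decoding restores the original string.
assert data.decode('latin-1') == text
```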
Labels: Apache NiFi
01-30-2017
06:54 AM
This solution doesn't work. In the Spark History Server I see the new name, but not in the YARN Web UI.
01-26-2017
02:17 PM
Hello, I run the Spark interpreter in isolated mode. For all notebooks, spark.app.name is set to "Zeppelin", so in the YARN RM Web UI the application name is "Zeppelin" for every started notebook. Is there a way to set spark.app.name differently for each notebook?
Labels: Apache Spark, Apache Zeppelin