1973 Posts · 1225 Kudos Received · 124 Solutions
11-03-2016
01:47 AM
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ nifi-assembly ---
[INFO] Installing /Volumes/Transcend/projects/nifi/nifi-assembly/pom.xml to /Users/tspann/.m2/repository/org/apache/nifi/nifi-assembly/1.1.0-SNAPSHOT/nifi-assembly-1.1.0-SNAPSHOT.pom
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] nifi ............................................... SUCCESS [ 0.947 s]
[INFO] nifi-api ........................................... SUCCESS [ 18.126 s]
[INFO] nifi-framework-api ................................. SUCCESS [ 12.108 s]
[INFO] nifi-commons ....................................... SUCCESS [ 0.539 s]
[INFO] nifi-security-utils ................................ SUCCESS [ 12.961 s]
[INFO] nifi-utils ......................................... SUCCESS [ 26.998 s]
[INFO] nifi-data-provenance-utils ......................... SUCCESS [ 6.681 s]
[INFO] nifi-flowfile-packager ............................. SUCCESS [ 19.166 s]
[INFO] nifi-expression-language ........................... SUCCESS [ 49.708 s]
[INFO] nifi-logging-utils ................................. SUCCESS [ 11.279 s]
[INFO] nifi-properties .................................... SUCCESS [ 19.457 s]
[INFO] nifi-socket-utils .................................. SUCCESS [ 11.359 s]
[INFO] nifi-web-utils ..................................... SUCCESS [ 9.211 s]
[INFO] nifi-nar-bundles ................................... SUCCESS [ 0.423 s]
[INFO] nifi-standard-services ............................. SUCCESS [ 0.191 s]
[INFO] nifi-ssl-context-service-api ....................... SUCCESS [ 4.366 s]
[INFO] nifi-processor-utils ............................... SUCCESS [ 15.559 s]
[INFO] nifi-write-ahead-log ............................... SUCCESS [ 37.596 s]
[INFO] nifi-framework-bundle .............................. SUCCESS [ 0.186 s]
[INFO] nifi-framework ..................................... SUCCESS [ 8.725 s]
[INFO] nifi-client-dto .................................... SUCCESS [ 9.048 s]
[INFO] nifi-site-to-site-client ........................... SUCCESS [ 43.232 s]
[INFO] nifi-hl7-query-language ............................ SUCCESS [ 32.272 s]
[INFO] nifi-hadoop-utils .................................. SUCCESS [ 16.190 s]
[INFO] nifi-bootstrap ..................................... SUCCESS [ 4.862 s]
[INFO] nifi-mock .......................................... SUCCESS [ 10.900 s]
[INFO] nifi-nar-utils ..................................... SUCCESS [ 5.801 s]
[INFO] nifi-framework-authorization ....................... SUCCESS [ 2.918 s]
[INFO] nifi-framework-core-api ............................ SUCCESS [ 6.668 s]
[INFO] NiFi Properties Loader ............................. SUCCESS [ 5.558 s]
[INFO] nifi-documentation ................................. SUCCESS [ 11.866 s]
[INFO] nifi-runtime ....................................... SUCCESS [ 5.656 s]
[INFO] nifi-security ...................................... SUCCESS [ 8.986 s]
[INFO] nifi-user-actions .................................. SUCCESS [ 3.050 s]
[INFO] nifi-administration ................................ SUCCESS [ 8.153 s]
[INFO] nifi-site-to-site .................................. SUCCESS [ 31.641 s]
[INFO] nifi-framework-cluster-protocol .................... SUCCESS [ 21.261 s]
[INFO] nifi-framework-core ................................ SUCCESS [03:23 min]
[INFO] nifi-web ........................................... SUCCESS [ 1.357 s]
[INFO] nifi-web-security .................................. SUCCESS [ 22.575 s]
[INFO] nifi-web-optimistic-locking ........................ SUCCESS [ 2.002 s]
[INFO] nifi-framework-cluster ............................. SUCCESS [ 12.125 s]
[INFO] nifi-file-authorizer ............................... SUCCESS [ 9.513 s]
[INFO] nifi-custom-ui-utilities ........................... SUCCESS [ 3.049 s]
[INFO] nifi-web-content-access ............................ SUCCESS [ 7.045 s]
[INFO] nifi-ui-extension .................................. SUCCESS [ 1.506 s]
[INFO] nifi-authorizer .................................... SUCCESS [ 5.066 s]
[INFO] nifi-provenance-repository-bundle .................. SUCCESS [ 0.116 s]
[INFO] nifi-volatile-provenance-repository ................ SUCCESS [ 9.860 s]
[INFO] nifi-web-api

It worked for me:

export MAVEN_OPTS="-Xms1024m -Xmx3076m -XX:MaxPermSize=256m"
mvn -T 2.0C clean install

Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T11:41:47-05:00)
Maven home: /Users/tspann/.sdkman/candidates/maven/current
Java version: 1.8.0_91, vendor: Oracle Corporation
Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.12.1", arch: "x86_64", family: "mac"
11-03-2016
01:43 AM
2 Kudos
There are tools that let you join disjoint tables, like SAP HANA Vora and Microsoft PolyBase. What features of the join do you want? If you just want one really wide row:

1. NiFi: one ExecuteSQL for Teradata and one for DB2 (use the same number of fields and alias them with the same names), then MergeContent: https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.MergeContent/index.html
2. SparkSQL: see https://community.hortonworks.com/repos/29883/sparksql-data-federation-demo.html
3. You could join them in Java code with two JDBC connections and some funky coding.
4. Ingest the Teradata data into HDFS with that ExecuteSQL query, ingest the DB2 data into HDFS with another ExecuteSQL query, then write a Hive query on top of both and run a third ExecuteSQL query against it. The advantage is that you now have a data lake. Most people just load those legacy data sources into a Hadoop data lake for analytics and write their queries on that.
5. If you want to do this more real-time, push those two data sources into an in-memory datastore like Redis, Geode, or Ignite, and then query that.
6. Send the data to Kafka and let a Spark app combine them.
7. Push them all to HBase and do Phoenix queries.
8. Try Apache Drill: https://drill.apache.org/docs/using-sql-functions-clauses-and-joins/
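Option 3 can be sketched in a few lines. This is only an illustration of the two-connection in-code join idea, using Python's sqlite3 as a stand-in for the Teradata and DB2 JDBC connections; the table and column names are made up:

```python
import sqlite3

# Two separate connections stand in for the Teradata and DB2 sources.
teradata = sqlite3.connect(":memory:")
db2 = sqlite3.connect(":memory:")

teradata.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
teradata.execute("INSERT INTO customers VALUES (1, 'alice'), (2, 'bob')")

db2.execute("CREATE TABLE orders (customer_id INTEGER, total REAL)")
db2.execute("INSERT INTO orders VALUES (1, 9.99), (2, 19.99)")

# Pull both result sets client-side and join them in code on the key.
cust = {row[0]: row[1] for row in teradata.execute("SELECT id, name FROM customers")}
joined = [
    (cid, cust[cid], total)
    for cid, total in db2.execute("SELECT customer_id, total FROM orders")
    if cid in cust  # inner-join semantics
]
print(joined)  # one wide row per matching key
```

This works for small result sets; for anything large you will want one of the engine-based options above instead of holding both sides in memory.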
11-03-2016
12:24 AM
1 Kudo
Did you do a full mvn clean? Do you have JDK 1.8 and Maven 3.1 installed?

brew install protobuf

This is a helpful tool: http://sdkman.io/
11-03-2016
12:19 AM
Make sure the server is actually stopped; something else is running on that port. Do a ps -ef and see if it's still running as a zombie, and kill -9 it. If you can hard-reboot the server, that would help.
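As a quick check of whether something is still holding the port before you restart, a small probe like this can help (Python sketch; the port number is just an example):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 on a successful connect, i.e. a listener exists
        return s.connect_ex((host, port)) == 0

print(port_in_use(8080))  # e.g. check the server's port before restarting
```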
11-03-2016
12:03 AM
See these: https://community.hortonworks.com/questions/4345/querying-json-data-using-hive.html https://community.hortonworks.com/questions/28684/creating-a-hive-table-with-orgapachehcatalogdatajs.html http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_data-access/content/moving_data_from_hdfs_to_hive_external_table_method.html
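The approach in those links boils down to pointing an external Hive table at the JSON files with the HCatalog JsonSerDe. A minimal sketch, where the table name, columns, jar path, and HDFS location are all hypothetical placeholders:

```sql
-- requires hive-hcatalog-core on the classpath, e.g. (path is an example):
-- ADD JAR /usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar;
CREATE EXTERNAL TABLE events_json (
  id BIGINT,
  event_type STRING,
  payload STRING
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION '/data/events';

SELECT event_type, COUNT(*) FROM events_json GROUP BY event_type;
```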
11-02-2016
06:13 PM
2 Kudos
Is iptables running? Can you get to that host? The port seems odd. Make sure there's no firewall, Knox, proxy, or gateway blocking that port. Restart the client and, if you can, HiveServer2.

No Route to Host

You get a TCP "No Route To Host" error, often wrapped in a Java IOException, when one machine on the network does not know how to send TCP packets to the machine specified. Some possible causes (not an exhaustive list):

- The hostname of the remote machine is wrong in the configuration files.
- The client's host table /etc/hosts has an invalid IP address for the target host.
- The DNS server's host table has an invalid IP address for the target host.
- The client's routing tables (in Linux, iptables) are wrong.
- The DHCP server is publishing bad routing information.
- Client and server are on different subnets and are not set up to talk to each other. This may be an accident, or it may be a deliberate lockdown of the Hadoop cluster.
- The machines are trying to communicate using IPv6. Hadoop does not currently support IPv6.
- The host's IP address has changed but a long-lived JVM is caching the old value. This is a known problem with JVMs (search for "java negative DNS caching" for the details and solutions). The quick solution: restart the JVMs.

These are all network configuration/router issues. As it is your network, only you can find out and track down the problem.
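To narrow down which of these causes you are hitting, a small probe that classifies the TCP failure mode can save time. A Python sketch (the host and port below are placeholders):

```python
import errno
import socket

def probe(host, port, timeout=3.0):
    """Attempt a TCP connect and report which failure mode occurred."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "connected"
    except socket.timeout:
        return "timeout"                 # often a firewall silently dropping packets
    except OSError as e:
        if e.errno == errno.EHOSTUNREACH:
            return "no route to host"    # routing/subnet misconfiguration
        if e.errno == errno.ECONNREFUSED:
            return "connection refused"  # host reachable, service not listening
        return "error: {}".format(e)     # e.g. DNS failure
    finally:
        s.close()

print(probe("127.0.0.1", 10000))  # e.g. is HiveServer2's default port reachable?
```

"connection refused" means the network path is fine and the service itself is down; "no route to host" points at the routing/subnet items in the list above.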
11-02-2016
06:09 PM
It's on a table-by-table basis; Hive will just add it to the list of available compression algorithms. It is not trying to use it, just making it available if you need it. Did you run from Beeline or just the Hive CLI? Make sure the LZO jar is in your path.

In the Hive CLI (https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli), use the ADD JAR syntax to point to that LZO jar, then run your query. You can also make sure Hive, the Hive Thrift service, HDFS, and other tools that may be referencing it have enough memory and are restarted. Make sure the settings are applied on all nodes. Without knowing your full environment, how it was set up, and the directory and cluster structure, it is very difficult to troubleshoot. If things were set up in a non-standard way, it can be difficult to find things in the PATH that may be needed for Hive.
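In a Hive CLI session that might look like the following; the jar path is only an example, so point it at wherever your hadoop-lzo jar actually lives:

```sql
-- make the LZO codec classes visible to this session (example path)
ADD JAR /usr/hdp/current/hadoop-client/lib/hadoop-lzo.jar;

-- optionally compress job output with LZO for this session
SET mapreduce.output.fileoutputformat.compress=true;
SET mapreduce.output.fileoutputformat.compress.codec=com.hadoop.compression.lzo.LzopCodec;

SELECT COUNT(*) FROM my_table;  -- my_table is a placeholder
```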
11-02-2016
05:22 PM
You need the configuration and you also need to make sure the LZO software is installed.

RHEL/CentOS/Oracle Linux: yum install lzo lzo-devel hadooplzo hadooplzo-native
SLES: zypper install lzo lzo-devel hadooplzo hadooplzo-native
Ubuntu/Debian: apt-get install liblzo2-2 liblzo2-dev hadooplzo (note: HDP support for Debian 6 is deprecated with HDP 2.4.2; future versions of HDP will no longer be supported on Debian 6)

http://dev.hortonworks.com.s3.amazonaws.com/HDPDocuments/Ambari-1.6.0.0/bk_ambari_reference/content/ambari-ref-lzo-configure.html
https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.1/bk_ambari_reference_guide/content/_configure_core-sitexml_for_lzo.html
http://dev.hortonworks.com.s3.amazonaws.com/HDPDocuments/Ambari-1.6.0.0/bk_ambari_reference/content/ambari-ref-lzo-hive-queries.html

You must Stop, then Start the HDFS service for Ambari to install the necessary LZO packages. Performing a Restart or a Restart All will not trigger the required package install.
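The configuration part referenced in those links is the core-site.xml codec registration. A typical fragment looks like the following; treat it as a sketch, since the exact codec list may vary with your HDP version:

```xml
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<property>
  <name>io.compression.codec.lzo.class</name>
  <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
```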
11-02-2016
03:06 PM
Restart the servers. Is this installed through Ambari? What version are you running? Are there any other messages in the logs after you restarted?
11-02-2016
03:03 PM
For manually installing LZO, see https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.3/bk_installing_manually_book/content/install_compression_libraries.html (3.2. Install LZO). Execute the following command on all the nodes in your cluster:

RHEL/CentOS/Oracle Linux: yum install lzo lzo-devel hadooplzo hadooplzo-native
SLES: zypper install lzo lzo-devel hadooplzo hadooplzo-native
Ubuntu/Debian: apt-get install liblzo2-2 liblzo2-dev hadooplzo (note: HDP support for Debian 6 is deprecated with HDP 2.4.2; future versions of HDP will no longer be supported on Debian 6)

If you install with Ambari, you don't need to manually install LZO; that should be done by the wizard.