Member since
02-10-2016
50
Posts
14
Kudos Received
5
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1180 | 02-08-2017 05:53 AM
 | 863 | 02-02-2017 11:39 AM
 | 2326 | 01-27-2017 06:17 PM
 | 1199 | 01-27-2017 04:43 PM
 | 1629 | 01-27-2017 01:57 PM
02-13-2017
08:52 AM
I'm running a simple flow on dockerized NiFi 1.1.1 which uses the InvokeHTTP processor to load data from an https data source. I have set up a StandardSSLContextService where the keystore filename (/etc/ssl/certs/java/cacerts), password (the default 'changeme'), and type (JKS) are defined. The flow works if I use the GetHTTP processor (and the https endpoint), but when I switch to InvokeHTTP the following exception is thrown:
2017-02-13 08:48:44,748 ERROR [Timer-Driven Process Thread-7] o.a.nifi.processors.standard.InvokeHTTP InvokeHTTP[id=3352601e-015a-1000-20c6-39e7d9786866] Routing to Failure due to exception: javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No X509TrustManager implementation available: javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No X509TrustManager implementation available
2017-02-13 08:48:44,749 ERROR [Timer-Driven Process Thread-7] o.a.nifi.processors.standard.InvokeHTTP
javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No X509TrustManager implementation available
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192) ~[na:1.8.0_111]
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949) ~[na:1.8.0_111]
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302) ~[na:1.8.0_111]
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296) ~[na:1.8.0_111]
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1509) ~[na:1.8.0_111]
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216) ~[na:1.8.0_111]
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:979) ~[na:1.8.0_111]
at sun.security.ssl.Handshaker.process_record(Handshaker.java:914) ~[na:1.8.0_111]
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1062) ~[na:1.8.0_111]
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375) ~[na:1.8.0_111]
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403) ~[na:1.8.0_111]
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387) ~[na:1.8.0_111]
at com.squareup.okhttp.internal.io.RealConnection.connectTls(RealConnection.java:188) ~[okhttp-2.7.1.jar:na]
at com.squareup.okhttp.internal.io.RealConnection.connectSocket(RealConnection.java:145) ~[okhttp-2.7.1.jar:na]
at com.squareup.okhttp.internal.io.RealConnection.connect(RealConnection.java:108) ~[okhttp-2.7.1.jar:na]
at com.squareup.okhttp.internal.http.StreamAllocation.findConnection(StreamAllocation.java:184) ~[okhttp-2.7.1.jar:na]
at com.squareup.okhttp.internal.http.StreamAllocation.findHealthyConnection(StreamAllocation.java:126) ~[okhttp-2.7.1.jar:na]
at com.squareup.okhttp.internal.http.StreamAllocation.newStream(StreamAllocation.java:95) ~[okhttp-2.7.1.jar:na]
at com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:283) ~[okhttp-2.7.1.jar:na]
at com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:224) ~[okhttp-2.7.1.jar:na]
at com.squareup.okhttp.Call.getResponse(Call.java:286) ~[okhttp-2.7.1.jar:na]
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243) ~[okhttp-2.7.1.jar:na]
at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:205) ~[okhttp-2.7.1.jar:na]
at com.squareup.okhttp.Call.execute(Call.java:80) ~[okhttp-2.7.1.jar:na]
at org.apache.nifi.processors.standard.InvokeHTTP.onTrigger(InvokeHTTP.java:624) ~[nifi-standard-processors-1.1.1.jar:1.1.1]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.1.1.jar:1.1.1]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) [nifi-framework-core-1.1.1.jar:1.1.1]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.1.jar:1.1.1]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.1.jar:1.1.1]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.1.jar:1.1.1]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_111]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_111]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_111]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.security.cert.CertificateException: No X509TrustManager implementation available
at sun.security.ssl.DummyX509TrustManager.checkServerTrusted(SSLContextImpl.java:1119) ~[na:1.8.0_111]
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1491) ~[na:1.8.0_111]
... 32 common frames omitted
The same flow works well if I run it natively under OS X. Any idea / help is greatly appreciated!
Labels:
- Apache NiFi
02-12-2017
01:30 PM
Repo Description
Superset is a data exploration platform designed to be visual, intuitive and interactive. (This project used to be named Caravel, and Panoramix before that.)
Screenshots & GIFs: View Dashboards; View/Edit a Slice; Query and Visualize with SQL Lab
Superset's main goal is to make it easy to slice, dice and visualize data. It empowers users to perform analytics at the speed of thought. Superset provides:
- A quick way to intuitively visualize datasets by allowing users to create and share interactive dashboards
- A rich set of visualizations to analyze your data, as well as a flexible way to extend the capabilities
- An extensible, high-granularity security model allowing intricate rules on who can access which features, and integration with major authentication providers (database, OpenID, LDAP, OAuth and REMOTE_USER through Flask AppBuilder)
- A simple semantic layer that lets you control how data sources are displayed in the UI, by defining which fields should show up in which dropdown and which aggregations and functions (metrics) are made available to the user
- Deep integration with Druid, which allows Superset to stay blazing fast while slicing and dicing large, real-time datasets
- Fast-loading dashboards with configurable caching
Repo Info
Github Repo URL: https://github.com/airbnb/superset
Github account name: airbnb
Repo name: superset
- Find more articles tagged with:
- Data Science & Advanced Analytics
- superset
- utilities
- visualization
02-12-2017
10:46 AM
Repo Description
Thrill is an EXPERIMENTAL C++ framework for algorithmic distributed Big Data batch computations on a cluster of machines. It is currently being designed and developed as a research project at Karlsruhe Institute of Technology and is in early testing. For more information on its goals and mission, see http://project-thrill.org. For easy steps on getting started, refer to the Live Documentation.
Repo Info
Github Repo URL: https://github.com/thrill/thrill/
Github account name: thrill
Repo name: thrill
- Find more articles tagged with:
- Data Processing
- data-processing
- utilities
02-11-2017
07:01 PM
This is an Ambari "feature": every time you restart the service, Ambari overwrites the configuration with the configuration defined in Ambari.
To enable RAS you should go to Storm / Configuration / Custom storm-site, then click on "Add property ...".
Then you'll be able to add your desired RAS setting:
storm.scheduler: "org.apache.storm.scheduler.resource.ResourceAwareScheduler"
After setting the parameter you should restart all the Storm services.
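For reference, a minimal sketch of what this ends up looking like in storm.yaml once Ambari pushes the Custom storm-site values out (only the class name comes from the setting above; the rest is illustrative):

```yaml
# storm.yaml fragment (managed by Ambari) - enables the Resource Aware Scheduler
storm.scheduler: "org.apache.storm.scheduler.resource.ResourceAwareScheduler"
```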
02-11-2017
01:21 PM
1 Kudo
Repo Description
The Apache Ignite In-Memory Data Fabric is a high-performance, integrated and distributed in-memory platform for computing and transacting on large-scale data sets in real time, orders of magnitude faster than possible with traditional disk-based or flash technologies. It is designed to deliver uncompromised performance for a wide set of in-memory computing use cases, from high-performance computing to the industry's most advanced data grid, highly available service grid, and streaming.
Advanced Clustering
Ignite nodes can automatically discover each other. This helps to scale the cluster when needed, without having to restart the whole cluster. Developers can also leverage Ignite's hybrid cloud support, which allows establishing connections between private clouds and public clouds such as Amazon Web Services, providing them with the best of both worlds.
Data Grid (JCache)
The Ignite data grid is an in-memory distributed key-value store which can be viewed as a distributed partitioned hash map, with every cluster node owning a portion of the overall data. This way, the more cluster nodes we add, the more data we can cache. Unlike other key-value stores, Ignite determines data locality using a pluggable hashing algorithm. Every client can determine which node a key belongs to by plugging it into a hashing function, without the need for any special mapping servers or name nodes. The Ignite data grid supports local, replicated, and partitioned data sets and allows you to freely cross-query between these data sets using standard SQL syntax, including distributed SQL joins. The data grid offers many features, some of which are:
- Primary & backup copies
- Near caches
- Cache queries and SQL queries
- Continuous queries
- Transactions
- Off-heap memory
- Affinity collocation
- Persistent store
- Automatic persistence
- Data loading
- Eviction and expiry policies
- Data rebalancing
- Web session clustering
- Hibernate L2 cache
- JDBC driver
- Spring caching
- Topology validation
Streaming & CEP
Ignite streaming allows you to process continuous, never-ending streams of data in a scalable and fault-tolerant fashion. The rates at which data can be injected into Ignite can be very high and easily exceed millions of events per second on a moderately sized cluster. Real-time data is ingested via data streamers. Streamers are offered for JMS 1.1, Apache Kafka, MQTT, Twitter, Apache Flume and Apache Camel, with new ones added every release. The data can then be queried within sliding windows, if needed.
Compute Grid
Distributed computations are performed in a parallel fashion to gain high performance, low latency, and linear scalability. The Ignite compute grid provides a set of simple APIs that allow users to distribute computations and data processing across multiple computers in the cluster. Distributed parallel processing is based on the ability to take any computation, execute it on any set of cluster nodes and return the results. Supported features include:
- Distributed closure execution
- MapReduce & ForkJoin processing
- Clustered executor service
- Collocation of compute and data
- Load balancing
- Fault tolerance
- Job state checkpointing
- Job scheduling
Service Grid
The Service Grid allows for deployment of arbitrary user-defined services on the cluster. You can implement and deploy any service, such as custom counters, ID generators, hierarchical maps, etc. Ignite allows you to control how many instances of your service should be deployed on each cluster node and will automatically ensure proper deployment and fault tolerance of all the services.
Ignite File System
The Ignite File System (IGFS) is an in-memory file system that allows working with files and directories over the existing cache infrastructure. IGFS can either work as a purely in-memory file system, or delegate to another file system (e.g. various Hadoop file system implementations) acting as a caching layer. In addition, IGFS provides an API to execute map-reduce tasks over file system data.
Distributed Data Structures
Ignite supports complex data structures in a distributed fashion:
- Queues and sets: ordinary, bounded, collocated, non-collocated
- Atomic types: AtomicLong and AtomicReference
- CountDownLatch
- ID generators
Distributed Messaging
Distributed messaging allows for topic-based cluster-wide communication between all nodes. Messages with a specified message topic can be distributed to all or a sub-group of nodes that have subscribed to that topic. Ignite messaging is based on the publish-subscribe paradigm, where publishers and subscribers are connected together by a common topic. When one of the nodes sends a message A for topic T, it is published on all nodes that have subscribed to T.
Distributed Events
Distributed events allow applications to receive notifications when a variety of events occur in the distributed grid environment. You can automatically get notified of task executions, and of read, write or query operations occurring on local or remote nodes within the cluster.
Hadoop Accelerator
The Hadoop Accelerator provides a set of components allowing for in-memory Hadoop job execution and file system operations.
- MapReduce: An alternate high-performance implementation of the job tracker which replaces standard Hadoop MapReduce. Use it to boost your Hadoop MapReduce job execution performance.
- IGFS - In-Memory File System: A Hadoop-compliant IGFS file system implementation over which Hadoop can run in plug-and-play fashion and significantly reduce I/O and improve both latency and throughput.
- Secondary File System: An implementation of SecondaryFileSystem. This implementation can be injected into an existing IGFS, allowing for read-through and write-through behavior over any other Hadoop FileSystem implementation (e.g. HDFS). Use it if you want your IGFS to become an in-memory caching layer over disk-based HDFS or any other Hadoop-compliant file system.
Supported Hadoop distributions: Apache Hadoop, Cloudera CDH, Hortonworks HDP, Apache BigTop.
Spark Shared RDDs
Apache Ignite provides an implementation of the Spark RDD abstraction which allows you to easily share state in memory across Spark jobs. The main difference between a native Spark RDD and an IgniteRDD is that the IgniteRDD provides a shared in-memory view on data across different Spark jobs, workers, or applications, while a native Spark RDD cannot be seen by other Spark jobs or applications.
Repo Info
Github Repo URL: https://github.com/apache/ignite
Github account name: apache
Repo name: ignite
- Find more articles tagged with:
- Cloud & Operations
- help
- ignite
- utilities
02-11-2017
01:19 PM
1 Kudo
Repo Description
Apache JMeter features include:
Ability to load and performance test many different server/protocol types:
- Web - HTTP, HTTPS
- SOAP / REST
- FTP
- Database via JDBC
- LDAP
- Message-oriented Middleware (MOM) via JMS
- Mail - SMTP(S), POP3(S) and IMAP(S)
- Native commands or shell scripts
- TCP
A full multi-threading framework allows concurrent sampling by many threads and simultaneous sampling of different functions by separate thread groups. Careful GUI design allows faster Test Plan building and debugging. Caching and offline analysis/replaying of test results. Highly extensible core:
- Pluggable Samplers allow unlimited testing capabilities.
- Several load statistics may be chosen with pluggable timers.
- Data analysis and visualization plugins allow great extensibility and personalization.
- Functions can be used to provide dynamic input to a test or provide data manipulation.
- Scriptable Samplers (Groovy, BeanShell, BSF- and JSR223-compatible languages)
Repo Info
Github Repo URL: https://github.com/apache/jmeter
Github account name: apache
Repo name: jmeter
- Find more articles tagged with:
- benchmark
- Cloud & Operations
- jmeter
- utilities
02-10-2017
06:33 PM
2 Kudos
Repo Description
Apache Flink is an open source stream processing framework with powerful stream- and batch-processing capabilities. Learn more about Flink at http://flink.apache.org/
Features
- A streaming-first runtime that supports both batch processing and data streaming programs
- Elegant and fluent APIs in Java and Scala
- A runtime that supports very high throughput and low event latency at the same time
- Support for event time and out-of-order processing in the DataStream API, based on the Dataflow Model
- Flexible windowing (time, count, sessions, custom triggers) across different time semantics (event time, processing time)
- Fault tolerance with exactly-once processing guarantees
- Natural back-pressure in streaming programs
- Libraries for Graph processing (batch), Machine Learning (batch), and Complex Event Processing (streaming)
- Built-in support for iterative programs (BSP) in the DataSet (batch) API
- Custom memory management for efficient and robust switching between in-memory and out-of-core data processing algorithms
- Compatibility layers for Apache Hadoop MapReduce and Apache Storm
- Integration with YARN, HDFS, HBase, and other components of the Apache Hadoop ecosystem
Repo Info
Github Repo URL: https://github.com/apache/flink
Github account name: apache
Repo name: flink
- Find more articles tagged with:
- Data Ingestion & Streaming
- help
- streaming
- utilities
02-09-2017
08:28 PM
No, it is not possible: "A pivot is an aggregation where one (or more in the general case) of the grouping columns has its distinct values transposed into individual columns" Source: https://databricks.com/blog/2016/02/09/reshaping-data-with-pivot-in-apache-spark.html
02-08-2017
05:53 AM
1 Kudo
Very good question! Let's dig into Hadoop's source to find this out. The audit log uses java.net.InetAddress's toString() method to obtain a text form of the address: https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L7049 InetAddress.toString() returns the information in "hostname/ip" format. If the hostname is not resolvable (reverse lookup is not working), the hostname part is empty and you get a leading slash (e.g. /192.168.1.10 instead of myhost.example.com/192.168.1.10): http://docs.oracle.com/javase/7/docs/api/java/net/InetAddress.html#toString()
02-03-2017
01:56 PM
It really depends on your use case and latency requirements. If you need to store Storm's results into HDFS then you can use the Storm HDFS bolt. If you only need to store the source data, I'd suggest storing it from Kafka or Flume. That will result in lower latency on the Storm topology and better decoupling.
02-02-2017
08:59 PM
Deja vu: https://community.hortonworks.com/questions/80140/how-to-display-pivoted-dataframe-with-psark-pyspar.html#answer-80269
02-02-2017
12:15 PM
In Storm's nomenclature, 'nimbus' is the cluster manager: http://storm.apache.org/releases/1.0.1/Setting-up-a-Storm-cluster.html Spark calls the cluster manager the 'master': http://spark.apache.org/docs/latest/spark-standalone.html
02-02-2017
11:39 AM
Hello,
Both Storm & Spark support local mode. In Storm you need to create a LocalCluster instance, then you can submit your job onto that. You can find a description and an example in these links: http://storm.apache.org/releases/1.0.2/Local-mode.html https://github.com/apache/storm/blob/1.0.x-branch/examples/storm-starter/src/jvm/org/apache/storm/starter/WordCountTopology.java#L98
Spark's approach to local mode is somewhat different. The allocation is controlled through the Spark master setting, which can be set to local (or local[*], or local[N] where N is a number). If local is specified, the executors will be started on your machine.
Both Storm and Spark have monitoring capabilities through a web interface. You can find details about them here: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_storm-component-guide/content/using-storm-ui.html http://spark.apache.org/docs/latest/monitoring.html
YARN is not a requirement but an option for distributed mode; both Spark & Storm are able to function on their own.
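For the Spark side, here is a minimal PySpark sketch of running in local mode; the application name and the tiny sanity-check DataFrame are just illustrative assumptions:

```python
from pyspark.sql import SparkSession

# Local mode: Spark runs entirely on this machine, using all available cores.
# Use "local[N]" instead of "local[*]" to cap the number of worker threads.
spark = (SparkSession.builder
         .master("local[*]")
         .appName("local-mode-demo")   # illustrative application name
         .getOrCreate())

df = spark.range(10)   # tiny sanity-check DataFrame
print(df.count())      # prints 10

spark.stop()
```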
02-01-2017
09:12 PM
3 Kudos
The problem you're seeing is due to a bug in Ambari & (web)HDFS. First you need to upgrade your Ambari to the latest version (2.4.2.0), then upgrade HDP.
02-01-2017
08:49 AM
Currently your parsing logic is based on a state machine. That approach won't map well onto Spark's model: in Spark you'd need to load your data into a Dataset/DataFrame (or RDD) and do your operations through that data structure. I don't think anybody will convert your code to Spark here, and learning Spark would be inevitable anyway if you need to maintain the ported code.
The lowest-hanging fruit for you would be to try the PyPy interpreter, which is more performant than CPython: http://pypy.org/
I've also noticed in your code that you are reading the whole file content in one go: lines = file.readlines() It would be more efficient to iterate through the file line by line: for line in open("Virtual_Ports.log", "r") (see the short sketch below).
I'd also suggest using a profiler to see where your hotspots are.
Hope this helps, Tibor
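A short sketch of that line-by-line pattern, using the Virtual_Ports.log file name from your post; the matching logic is only a placeholder for your real parsing:

```python
# Iterate over the log one line at a time instead of loading everything with readlines().
def count_matching_lines(path="Virtual_Ports.log", needle="ERROR"):
    matches = 0
    with open(path, "r") as log:   # the with-block closes the file automatically
        for line in log:           # lazy, line-by-line iteration
            if needle in line:     # placeholder for the real parsing logic
                matches += 1
    return matches

if __name__ == "__main__":
    print(count_matching_lines())
```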
01-31-2017
06:51 PM
I don't think it is necessary to update the kernel. What's your motivation for this update?
01-30-2017
07:30 PM
Ambari should properly kerberize your cluster. Did you restart all the affected services after enabling Kerberos?
01-30-2017
07:28 PM
You can get 'invalid argument' if a process is holding the files open. You can try stopping PostgreSQL with 'service postgresql stop', then removing the files. As @Jay SenSharma suggested, 'ambari-server reset' could also solve this problem.
01-29-2017
12:53 PM
The error indicates database corruption in PostgreSQL. I would suggest investigating the corruption further if you have data (other than Ambari's state) in the database. If you are sure that you'd like to remove the contents of the /var/lib/pgsql directory then you should use the following command: sudo rm -rf /var/lib/pgsql/*
01-29-2017
08:00 AM
You were reading the documentation of Hadoop 3.0, which is not released yet (alpha 2 is out, but that's not production ready). The most recent production version of Hadoop is 2.7, for which you can find the respective guide here: https://hadoop.apache.org/docs/r2.7.0/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html
01-29-2017
07:56 AM
It seems like you have Kerberos enabled in HDFS but Hive does not have that setting. Please check your hive-site.xml and ensure that the following configuration items are set properly:
- hive.server2.authentication: should be set to KERBEROS
- hive.server2.authentication.kerberos.principal: set to Hive's principal
- hive.server2.authentication.kerberos.keytab: points to your Hive keytab
The easiest way is to use Ambari to set up the cluster for Kerberos. You can find the guide here: https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-security/content/ch_configuring_amb_hdp_for_kerberos.html
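For illustration, a hive-site.xml fragment with those three properties; the property names come from the list above, while the principal and keytab path are placeholder values you would replace with your own:

```xml
<!-- HiveServer2 Kerberos settings (principal and keytab path are placeholders) -->
<property>
  <name>hive.server2.authentication</name>
  <value>KERBEROS</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.principal</name>
  <value>hive/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.keytab</name>
  <value>/etc/security/keytabs/hive.service.keytab</value>
</property>
```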
01-29-2017
07:32 AM
Hi Sachin,
SmartSense is available to Hortonworks customers who have signed up for a support plan. It monitors server configurations and provides suggestions if any of the services are misconfigured. If you do not have a Hortonworks support contract you can disable it.
It seems that your original problem has been resolved. I'd suggest closing this thread by choosing a 'best answer' from the answers you think solved your problem. If you have further questions, please post a new question so that others can easily learn from it (without needing to understand the whole history of this thread).
Thanks, Tibor
01-27-2017
09:08 PM
I believe Zeppelin only supports setting spark.app.name per interpreter at the moment. As a workaround, you can try duplicating the default Spark interpreter and giving a unique spark.app.name to each newly created interpreter.
01-27-2017
08:10 PM
1 Kudo
After pivoting you need to run an aggregate function (e.g. sum) to get back a DataFrame/Dataset. After the aggregation you'll be able to call show() on the data. You can find an excellent overview of pivoting at this website: https://databricks.com/blog/2016/02/09/reshaping-data-with-pivot-in-apache-spark.html
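A minimal PySpark sketch of that pattern, using a small made-up DataFrame (column names and values are purely illustrative):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.master("local[*]").appName("pivot-demo").getOrCreate()

# Made-up sales data: (country, year, amount)
df = spark.createDataFrame(
    [("US", 2016, 10), ("US", 2017, 20), ("UK", 2016, 5), ("UK", 2017, 7)],
    ["country", "year", "amount"],
)

# pivot() on grouped data needs an aggregate; agg(...) turns the result back
# into a DataFrame, which can then be displayed with show().
df.groupBy("country").pivot("year").agg(F.sum("amount")).show()

spark.stop()
```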
01-27-2017
07:59 PM
It seems like you have the hdp-select package installed from the 2.3.4.7 release while you are trying to install 2.3.6.0. Please provide further info to help bring this problem to resolution: Are you trying to update from 2.3.4.7 to 2.3.6.0? Or have you installed 2.3.4.7's hdp-select tool manually?
01-27-2017
07:16 PM
On every node with ambari-agent installed: yum remove ambari-agent
On the Ambari server host: yum remove ambari-server
01-27-2017
06:17 PM
Ambari 2.1.0.0 is a rather old version. I'd suggest taking the latest & greatest: Ambari 2.4.3 & HDP 2.5.3. You can find the Ambari install instructions here: http://docs.hortonworks.com/HDPDocuments/Ambari/Ambari-2.4.2.0/index.html
01-27-2017
05:45 PM
You have changed the repo from 2.3.6.0 to point to 2.3.4.7, which is an older release. This should not be necessary (unless you want that particular version). I'd suggest reverting your change and going with the default. If it is a fresh installation I'd suggest trying our latest release, HDP 2.5.3. You'll get tons of fixes, new features and a warm feeling of being up-to-date.
01-27-2017
05:04 PM
It seems like you have the hdp-select package installed from the 2.3.4.7 release while you are trying to install 2.3.6.0. Please provide further info to help bring this problem to resolution: Are you trying to update from 2.3.4.7 to 2.3.6.0? Or have you installed 2.3.4.7's hdp-select tool manually?