Member since: 03-24-2016
Posts: 184
Kudos Received: 239
Solutions: 39
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 2545 | 10-21-2017 08:24 PM |
| | 1559 | 09-24-2017 04:06 AM |
| | 5587 | 05-15-2017 08:44 PM |
| | 1682 | 01-25-2017 09:20 PM |
| | 5517 | 01-22-2017 11:51 PM |
11-01-2017
04:53 PM
@Tamas Bihari I had /cb/api/v1/stacks/user in my code but was calling /api/v1/stacks/user in my tests. Turns out it helps to call the correct API endpoint 🙂

The problem turned out to be the invalid SSL cert mentioned above. I built a Spring application on top of Cloudbreak with an SSL context that trusts all certificates, but it was still using default hostname verification. The previous instance I referred to had a valid certificate, so everything worked fine. When I installed the fresh instance of Cloudbreak, the generated certificate did not have the correct hostname. When I called the API, the application threw a certificate exception, but I was catching all exceptions and handling them as if they were auth token rejections. Adding an all-trusting HostnameVerifier resolved the exception. Thanks for putting a second pair of eyes on this.

BTW... the implicit grant does not seem to require the query string to be formatted as client_id=cloudbreak_shell&scope.0=openid&source=login&redirect_uri=http://cloudbreak.shell. The token obtained using

curl -iX POST -H "accept: application/x-www-form-urlencoded" -d 'credentials={"username":"admin@example.com","password":"cloudbreak"}' "http://***:8089/oauth/authorize?response_type=token&client_id=cloudbreak_shell"

seems to be valid.
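A quick way to confirm the hostname mismatch described above, before resorting to an all-trusting HostnameVerifier, is to inspect the certificate the Cloudbreak gateway actually presents. This is a minimal sketch assuming OpenSSL is available on the client; the hostname and port are placeholders, not values from this thread.

```bash
# Dump the subject and SAN entries of the certificate presented by the gateway
# (host and port are placeholders). If neither matches the hostname the client
# calls, default Java hostname verification will reject the connection.
echo | openssl s_client -connect cloudbreak.example.com:443 -servername cloudbreak.example.com 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -E 'Subject:|DNS:'
```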
10-31-2017
09:12 PM
On CB version 1.16.4, attempting to obtain an OAuth token to access the REST API. (An SSL cert is in place but has the wrong hostname.)

Call to UAA:

curl -iX POST -H "accept: application/x-www-form-urlencoded" -d 'credentials={"username":"admin@example.com","password":"cloudbreak"}' "http://***:8089/oauth/authorize?response_type=token&client_id=cloudbreak_shell"
Response: HTTP/1.1 302 Found Server: Apache-Coyote/1.1 Cache-Control: no-store X-XSS-Protection: 1; mode=block X-Frame-Options: DENY X-Content-Type-Options: nosniff Location: http://cloudbreak.shell#token_type=bearer&access_token=eyJhbGciOiJIUzI1NiIsImtpZCI6ImxlZ2FjeS10b2tlbi1rZXkiLCJ0eXAiOiJKV1QifQ.eyJqdGkiOiJkZDVmMWUwMDNmNTQ0MzY2OTM1ODMzNTdiMTBhYjcwYyIsInN1YiI6IjIwOTllZGRjLThhMjktNDlhOC1iN2E1LTYzY2RlYTViNTVhZCIsInNjb3BlIjpbImNsb3VkYnJlYWsubmV0d29ya3MucmVhZCIsInBlcmlzY29wZS5jbHVzdGVyIiwiY2xvdWRicmVhay51c2FnZXMudXNlciIsImNsb3VkYnJlYWsucmVjaXBlcyIsImNsb3VkYnJlYWsudXNhZ2VzLmdsb2JhbCIsIm9wZW5pZCIsImNsb3VkYnJlYWsucGxhdGZvcm1zIiwiY2xvdWRicmVhay50ZW1wbGF0ZXMucmVhZCIsImNsb3VkYnJlYWsudXNhZ2VzLmFjY291bnQiLCJjbG91ZGJyZWFrLmV2ZW50cyIsImNsb3VkYnJlYWsuc3RhY2tzLnJlYWQiLCJjbG91ZGJyZWFrLmJsdWVwcmludHMiLCJjbG91ZGJyZWFrLm5ldHdvcmtzIiwiY2xvdWRicmVhay50ZW1wbGF0ZXMiLCJjbG91ZGJyZWFrLnNzc2Rjb25maWdzIiwiY2xvdWRicmVhay5wbGF0Zm9ybXMucmVhZCIsImNsb3VkYnJlYWsuY3JlZGVudGlhbHMucmVhZCIsImNsb3VkYnJlYWsuc2VjdXJpdHlncm91cHMucmVhZCIsImNsb3VkYnJlYWsuc2VjdXJpdHlncm91cHMiLCJjbG91ZGJyZWFrLnN0YWNrcyIsImNsb3VkYnJlYWsuY3JlZGVudGlhbHMiLCJjbG91ZGJyZWFrLnJlY2lwZXMucmVhZCIsImNsb3VkYnJlYWsuc3NzZGNvbmZpZ3MucmVhZCIsImNsb3VkYnJlYWsuYmx1ZXByaW50cy5yZWFkIl0sImNsaWVudF9pZCI6ImNsb3VkYnJlYWtfc2hlbGwiLCJjaWQiOiJjbG91ZGJyZWFrX3NoZWxsIiwiYXpwIjoiY2xvdWRicmVha19zaGVsbCIsInVzZXJfaWQiOiIyMDk5ZWRkYy04YTI5LTQ5YTgtYjdhNS02M2NkZWE1YjU1YWQiLCJvcmlnaW4iOiJ1YWEiLCJ1c2VyX25hbWUiOiJhZG1pbkBleGFtcGxlLmNvbSIsImVtYWlsIjoiYWRtaW5AZXhhbXBsZS5jb20iLCJhdXRoX3RpbWUiOjE1MDk0ODM3NTUsInJldl9zaWciOiJjNjk1OWFhIiwiaWF0IjoxNTA5NDgzNzU2LCJleHAiOjE1MDk1MjY5NTYsImlzcyI6Imh0dHA6Ly9sb2NhbGhvc3Q6ODA4MC91YWEvb2F1dGgvdG9rZW4iLCJ6aWQiOiJ1YWEiLCJhdWQiOlsiY2xvdWRicmVha19zaGVsbCIsImNsb3VkYnJlYWsucmVjaXBlcyIsIm9wZW5pZCIsImNsb3VkYnJlYWsiLCJjbG91ZGJyZWFrLnBsYXRmb3JtcyIsImNsb3VkYnJlYWsuYmx1ZXByaW50cyIsImNsb3VkYnJlYWsudGVtcGxhdGVzIiwiY2xvdWRicmVhay5uZXR3b3JrcyIsInBlcmlzY29wZSIsImNsb3VkYnJlYWsuc3NzZGNvbmZpZ3MiLCJjbG91ZGJyZWFrLnVzYWdlcyIsImNsb3VkYnJlYWsuc2VjdXJpdHlncm91cHMiLCJjbG91ZGJyZWFrLnN0YWNrcyIsImNsb3VkYnJlYWsuY3JlZGVudGlhbHMiXX0.Kae0YSVvVzyno1H-DcsCkjb88-UCTgVKeiseTezeRyo&expires_in=43199&scope=cloudbreak.networks.read%20periscope.cluster%20cloudbreak.usages.user%20cloudbreak.recipes%20cloudbreak.usages.global%20openid%20cloudbreak.platforms%20cloudbreak.templates.read%20cloudbreak.usages.account%20cloudbreak.events%20cloudbreak.stacks.read%20cloudbreak.blueprints%20cloudbreak.networks%20cloudbreak.templates%20cloudbreak.sssdconfigs%20cloudbreak.platforms.read%20cloudbreak.credentials.read%20cloudbreak.securitygroups.read%20cloudbreak.securitygroups%20cloudbreak.stacks%20cloudbreak.credentials%20cloudbreak.recipes.read%20cloudbreak.sssdconfigs.read%20cloudbreak.blueprints.read&jti=dd5f1e003f54436693583357b10ab70c Content-Language: en Content-Length: 0 Store TOKEN in ENV export 
TOKEN=eyJhbGciOiJIUzI1NiIsImtpZCI6ImxlZ2FjeS10b2tlbi1rZXkiLCJ0eXAiOiJKV1QifQ.eyJqdGkiOiJkZDVmMWUwMDNmNTQ0MzY2OTM1ODMzNTdiMTBhYjcwYyIsInN1YiI6IjIwOTllZGRjLThhMjktNDlhOC1iN2E1LTYzY2RlYTViNTVhZCIsInNjb3BlIjpbImNsb3VkYnJlYWsubmV0d29ya3MucmVhZCIsInBlcmlzY29wZS5jbHVzdGVyIiwiY2xvdWRicmVhay51c2FnZXMudXNlciIsImNsb3VkYnJlYWsucmVjaXBlcyIsImNsb3VkYnJlYWsudXNhZ2VzLmdsb2JhbCIsIm9wZW5pZCIsImNsb3VkYnJlYWsucGxhdGZvcm1zIiwiY2xvdWRicmVhay50ZW1wbGF0ZXMucmVhZCIsImNsb3VkYnJlYWsudXNhZ2VzLmFjY291bnQiLCJjbG91ZGJyZWFrLmV2ZW50cyIsImNsb3VkYnJlYWsuc3RhY2tzLnJlYWQiLCJjbG91ZGJyZWFrLmJsdWVwcmludHMiLCJjbG91ZGJyZWFrLm5ldHdvcmtzIiwiY2xvdWRicmVhay50ZW1wbGF0ZXMiLCJjbG91ZGJyZWFrLnNzc2Rjb25maWdzIiwiY2xvdWRicmVhay5wbGF0Zm9ybXMucmVhZCIsImNsb3VkYnJlYWsuY3JlZGVudGlhbHMucmVhZCIsImNsb3VkYnJlYWsuc2VjdXJpdHlncm91cHMucmVhZCIsImNsb3VkYnJlYWsuc2VjdXJpdHlncm91cHMiLCJjbG91ZGJyZWFrLnN0YWNrcyIsImNsb3VkYnJlYWsuY3JlZGVudGlhbHMiLCJjbG91ZGJyZWFrLnJlY2lwZXMucmVhZCIsImNsb3VkYnJlYWsuc3NzZGNvbmZpZ3MucmVhZCIsImNsb3VkYnJlYWsuYmx1ZXByaW50cy5yZWFkIl0sImNsaWVudF9pZCI6ImNsb3VkYnJlYWtfc2hlbGwiLCJjaWQiOiJjbG91ZGJyZWFrX3NoZWxsIiwiYXpwIjoiY2xvdWRicmVha19zaGVsbCIsInVzZXJfaWQiOiIyMDk5ZWRkYy04YTI5LTQ5YTgtYjdhNS02M2NkZWE1YjU1YWQiLCJvcmlnaW4iOiJ1YWEiLCJ1c2VyX25hbWUiOiJhZG1pbkBleGFtcGxlLmNvbSIsImVtYWlsIjoiYWRtaW5AZXhhbXBsZS5jb20iLCJhdXRoX3RpbWUiOjE1MDk0ODM3NTUsInJldl9zaWciOiJjNjk1OWFhIiwiaWF0IjoxNTA5NDgzNzU2LCJleHAiOjE1MDk1MjY5NTYsImlzcyI6Imh0dHA6Ly9sb2NhbGhvc3Q6ODA4MC91YWEvb2F1dGgvdG9rZW4iLCJ6aWQiOiJ1YWEiLCJhdWQiOlsiY2xvdWRicmVha19zaGVsbCIsImNsb3VkYnJlYWsucmVjaXBlcyIsIm9wZW5pZCIsImNsb3VkYnJlYWsiLCJjbG91ZGJyZWFrLnBsYXRmb3JtcyIsImNsb3VkYnJlYWsuYmx1ZXByaW50cyIsImNsb3VkYnJlYWsudGVtcGxhdGVzIiwiY2xvdWRicmVhay5uZXR3b3JrcyIsInBlcmlzY29wZSIsImNsb3VkYnJlYWsuc3NzZGNvbmZpZ3MiLCJjbG91ZGJyZWFrLnVzYWdlcyIsImNsb3VkYnJlYWsuc2VjdXJpdHlncm91cHMiLCJjbG91ZGJyZWFrLnN0YWNrcyIsImNsb3VkYnJlYWsuY3JlZGVudGlhbHMiXX0.Kae0YSVvVzyno1H-DcsCkjb88-UCTgVKeiseTezeRyo Call to CB API with TOKEN curl -k -X GET -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" https://***/stacks/13
{
"InvalidTokenException": {
"error": [
"invalid_token"
],
"error_description": [
"undefined"
]
}
}
Get Cloudbreak shell token from CBD utils:

cbd util token
eyJhbGciOiJIUzI1NiIsImtpZCI6ImxlZ2FjeS10b2tlbi1rZXkiLCJ0eXAiOiJKV1QifQ.eyJqdGkiOiIwYWE4NmY2ZjgxMjA0OGVhYWQ5ZDg5NDFkZjllNzU2YSIsInN1YiI6IjIwOTllZGRjLThhMjktNDlhOC1iN2E1LTYzY2RlYTViNTVhZCIsInNjb3BlIjpbImNsb3VkYnJlYWsubmV0d29ya3MucmVhZCIsInBlcmlzY29wZS5jbHVzdGVyIiwiY2xvdWRicmVhay51c2FnZXMudXNlciIsImNsb3VkYnJlYWsucmVjaXBlcyIsImNsb3VkYnJlYWsudXNhZ2VzLmdsb2JhbCIsIm9wZW5pZCIsImNsb3VkYnJlYWsucGxhdGZvcm1zIiwiY2xvdWRicmVhay50ZW1wbGF0ZXMucmVhZCIsImNsb3VkYnJlYWsudXNhZ2VzLmFjY291bnQiLCJjbG91ZGJyZWFrLmV2ZW50cyIsImNsb3VkYnJlYWsuc3RhY2tzLnJlYWQiLCJjbG91ZGJyZWFrLmJsdWVwcmludHMiLCJjbG91ZGJyZWFrLm5ldHdvcmtzIiwiY2xvdWRicmVhay50ZW1wbGF0ZXMiLCJjbG91ZGJyZWFrLnNzc2Rjb25maWdzIiwiY2xvdWRicmVhay5wbGF0Zm9ybXMucmVhZCIsImNsb3VkYnJlYWsuY3JlZGVudGlhbHMucmVhZCIsImNsb3VkYnJlYWsuc2VjdXJpdHlncm91cHMucmVhZCIsImNsb3VkYnJlYWsuc2VjdXJpdHlncm91cHMiLCJjbG91ZGJyZWFrLnN0YWNrcyIsImNsb3VkYnJlYWsuY3JlZGVudGlhbHMiLCJjbG91ZGJyZWFrLnJlY2lwZXMucmVhZCIsImNsb3VkYnJlYWsuc3NzZGNvbmZpZ3MucmVhZCIsImNsb3VkYnJlYWsuYmx1ZXByaW50cy5yZWFkIl0sImNsaWVudF9pZCI6ImNsb3VkYnJlYWtfc2hlbGwiLCJjaWQiOiJjbG91ZGJyZWFrX3NoZWxsIiwiYXpwIjoiY2xvdWRicmVha19zaGVsbCIsInVzZXJfaWQiOiIyMDk5ZWRkYy04YTI5LTQ5YTgtYjdhNS02M2NkZWE1YjU1YWQiLCJvcmlnaW4iOiJ1YWEiLCJ1c2VyX25hbWUiOiJhZG1pbkBleGFtcGxlLmNvbSIsImVtYWlsIjoiYWRtaW5AZXhhbXBsZS5jb20iLCJhdXRoX3RpbWUiOjE1MDk0ODQxNDQsInJldl9zaWciOiJjNjk1OWFhIiwiaWF0IjoxNTA5NDg0MTQ0LCJleHAiOjE1MDk1MjczNDQsImlzcyI6Imh0dHA6Ly9sb2NhbGhvc3Q6ODA4MC91YWEvb2F1dGgvdG9rZW4iLCJ6aWQiOiJ1YWEiLCJhdWQiOlsiY2xvdWRicmVha19zaGVsbCIsImNsb3VkYnJlYWsucmVjaXBlcyIsIm9wZW5pZCIsImNsb3VkYnJlYWsiLCJjbG91ZGJyZWFrLnBsYXRmb3JtcyIsImNsb3VkYnJlYWsuYmx1ZXByaW50cyIsImNsb3VkYnJlYWsudGVtcGxhdGVzIiwiY2xvdWRicmVhay5uZXR3b3JrcyIsInBlcmlzY29wZSIsImNsb3VkYnJlYWsuc3NzZGNvbmZpZ3MiLCJjbG91ZGJyZWFrLnVzYWdlcyIsImNsb3VkYnJlYWsuc2VjdXJpdHlncm91cHMiLCJjbG91ZGJyZWFrLnN0YWNrcyIsImNsb3VkYnJlYWsuY3JlZGVudGlhbHMiXX0.xZgHAOTryXwbJN0DfaH_ISFU0IkLymTqlOmE2LZmKck
Store TOKEN in ENV:

export TOKEN=[token from above]

Call to CB API with TOKEN:

curl -k -X GET -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" https://***/stacks/13
{
"InvalidTokenException": {
"error": [
"invalid_token"
],
"error_description": [
"undefined" This worked fine in CB 1.14.x. Has something changed in terms of how UAA issues tokens or what those tokens have access to?
Labels: Hortonworks Cloudbreak
10-21-2017
08:24 PM
@Arsalan Siddiqi First, don't inherit from both Process and DataSet for all entities. Define two types: one called Dataframe (inheriting from DataSet) and the other called SparkProcess (inheriting from Process). Create two entities from the Dataframe type and one entity from the SparkProcess type. Remember, you are defining datasets and the processes that create them from other datasets. So the Stage type is more like a process, and the state of the data (the data structure) is where lineage applies, driven by the processing that happens in the stages.

The main issue is most likely that Atlas will only draw lineage if it sees that a Process entity has a reference to at least one entity in the input array and at least one entity in the output array. So when you create the SparkProcess entity, put the reference to the source Dataframe entity in the input array and the reference to the destination Dataframe in the output array. If all goes well, you should be able to click on either Dataframe entity and see lineage. The Process entity itself will not show lineage, since it essentially acts as the glue that creates lineage rather than being one of its endpoints, but you should see links to each of the Dataframe entities in the input and output attributes of the process entity.
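To make that last point concrete, here is a rough sketch of what the process-entity creation could look like against the Atlas v2 REST API, assuming the custom Dataframe and SparkProcess types and the two Dataframe entities already exist. The host, credentials, names, and qualifiedName values are placeholders, not values from the original thread.

```bash
# Rough sketch: create a SparkProcess entity that references one Dataframe entity
# in "inputs" and one in "outputs", so Atlas can draw lineage between them.
# Host, credentials, type names, and qualifiedNames are placeholders.
curl -u admin:admin -H "Content-Type: application/json" \
  -X POST "http://atlas-host:21000/api/atlas/v2/entity" \
  -d '{
    "entity": {
      "typeName": "SparkProcess",
      "attributes": {
        "qualifiedName": "stage1_transform@demo",
        "name": "stage1_transform",
        "inputs":  [ { "typeName": "Dataframe", "uniqueAttributes": { "qualifiedName": "source_df@demo" } } ],
        "outputs": [ { "typeName": "Dataframe", "uniqueAttributes": { "qualifiedName": "dest_df@demo" } } ]
      }
    }
  }'
```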
09-24-2017
04:06 AM
1 Kudo
@Amey Hegde I used your blueprint to create a cluster via Cloudbreak and I am able to enable Phoenix without any issues. Log into Ambari --> select the HBase service --> click the Configs tab (Settings, not Advanced) --> scroll to the bottom of the page to the section called Phoenix SQL --> flip the switch called "Enable Phoenix" --> save the settings --> restart all affected services. This kicks off an install and config process that installs the Phoenix binaries and makes a few configuration tweaks to hbase-site. If you SSH to the node, you can run /usr/hdp/current/phoenix-client/bin/sqlline.py and immediately start creating tables. Once you get data loaded, you can issue queries from there as well.
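A quick smoke test along those lines, assuming the HDP client path from the post and an unsecured single-node ZooKeeper; the quorum address and table name are placeholders.

```bash
# Write a small SQL script and run it through sqlline.py
# (ZooKeeper quorum and table name are placeholders).
cat > /tmp/phoenix_smoke.sql <<'EOF'
CREATE TABLE IF NOT EXISTS smoke_test (id BIGINT NOT NULL PRIMARY KEY, val VARCHAR);
UPSERT INTO smoke_test VALUES (1, 'hello');
SELECT * FROM smoke_test;
EOF
/usr/hdp/current/phoenix-client/bin/sqlline.py localhost:2181:/hbase-unsecure /tmp/phoenix_smoke.sql
```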
05-15-2017
08:44 PM
@Arsalan Siddiqi You should just be able to bring up the ExecuteProcess processor and configure the command you have there as the command to execute. Just make sure you give it the full path to the spark-submit2.cmd executable (e.g. /usr/bin/spark-submit). As long as the file and path you are referencing are on the same machine where NiFi is running (assuming it is a single box and not clustered), and the Spark client is present and configured correctly, the processor should just kick off the spark-submit. Make sure you change the scheduling to something greater than 0 seconds; otherwise, you will quickly fill up the cluster where the job is being submitted with duplicate jobs. You can also set it to be CRON scheduled.
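As an illustration of the configuration described above (not the poster's actual job), the processor's Command property would point at spark-submit and the rest would go into Command Arguments. The class name, jar path, and master below are placeholders.

```bash
# What the processor would effectively run:
#   Command          = /usr/bin/spark-submit
#   Command Arguments = everything after it
# Class, jar path, and master are placeholders.
/usr/bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyStreamingJob \
  /opt/jobs/my-streaming-job.jar
```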
05-11-2017
03:10 AM
@khorvath I dug into the Ambari logs as you suggested. There is nothing obvious, but Ambari is definitely returning success/complete as it begins to install services. I probably should have led with this, but I am using early bits from Ambari 2.5.1 (the version where HDP and HDF can be installed in the same cluster). There is probably some sort of disconnect between what Salt is looking for and what Ambari is actually returning. Perhaps this is already being addressed in CB 1.15.0. Thanks for your help.
05-10-2017
08:43 PM
cbd-log.txt attached. @khorvath That's exactly what happened. Any idea why?
05-10-2017
07:46 PM
Cloudbreak 1.14.0, OpenStack. I created a new Post type recipe. I have done this many times before, and up until today, Post recipes would start execution only after the Ambari blueprint completely finished installing, that is, after all services in Ambari were fully installed and started. For some reason, this recipe begins execution as soon as Ambari starts installing the blueprint. I am aware that Cloudbreak 1.14.0 is TP, but I have been using this version for the last month. Is there something special that needs to happen to ensure that a Post type recipe begins execution only after all Ambari services have finished installing and starting?
Labels: Hortonworks Cloudbreak
01-25-2017
09:20 PM
1 Kudo
@Joby Johny The demo has been migrated to HDP 2.5 and will no longer run on the HDP 2.4 Sandbox. This is mainly due to the upgrade of Storm from 0.10 to 1.0.1; we did not maintain the older branch. The demo will still install and run in a single-node Sandbox, but it has to be 2.5 and you will need to be able to allocate more memory. If you can get a large host that can allocate 16GB to the container, you can try downloading the Docker version and giving it more memory.
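If going the Docker route, the extra memory is typically granted on the container at run time. A minimal sketch; the container name and image name are placeholders for whichever HDP 2.5 sandbox image is downloaded.

```bash
# Illustrative only: start the sandbox container with 16 GB of memory.
# The image name is a placeholder, not the actual sandbox image name.
docker run -d --name hdp-sandbox --memory=16g example/hdp-2.5-sandbox
```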
01-23-2017
06:28 AM
1 Kudo
@Sankaraiah Narayanasamy Not unless you create a Hive table using an HBase storage handler: https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration This will impose a schema onto an HBase table through Hive and save the schema in the metastore. Once it's in the metastore, you can access it through HiveContext.

Or, if you have Phoenix installed and you create a table through Phoenix, it will create an HBase table as well as a schema catalog table. You can make a direct JDBC connection to Phoenix just like you would connect to MySQL or Postgres; you just need to use the Phoenix JDBC driver. You can then use metadata getters on the JDBC connection object to get the tables in Phoenix. Once you know the table you want to go after:

import org.apache.phoenix.spark._

val df = sqlContext.load(
  "org.apache.phoenix.spark",
  Map("table" -> "phoenix_table", "zkUrl" -> "localhost:2181:/hbase-unsecure"))

df.show

This way, Spark will load the data using executors in parallel. Now just use the DataFrame with the SQL context like normal.
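For completeness, a rough sketch of the first option (the HBase storage handler), following the syntax on the HBaseIntegration wiki page linked above. The JDBC URL, Hive and HBase table names, and the column-family mapping are placeholders.

```bash
# Map an existing HBase table into the Hive metastore so HiveContext can see it.
# URL, table names, and the column-family mapping are placeholders.
beeline -u "jdbc:hive2://hive-server:10000/default" -e "
CREATE EXTERNAL TABLE hbase_mapped_table (rowkey STRING, val STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf1:val')
TBLPROPERTIES ('hbase.table.name' = 'my_hbase_table');
"
```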