Member since: 09-30-2015
Posts: 41
Kudos Received: 20
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2012 | 02-10-2017 09:20 PM
 | 3095 | 08-09-2016 01:05 PM
08-11-2017
01:50 PM
Hi Sunil, take a look at this tutorial - https://hortonworks.com/tutorial/tag-based-policies-with-apache-ranger-and-apache-atlas/ - to confirm your steps. If you still see that message, check http://sandbox.hortonworks.com:8886/solr/#/~cloud to confirm you have a ranger_audits collection. If you don't see it, I recommend restarting the Ambari Infra service. Once you see the collection in Solr, you should see the audit logs.
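If you prefer the command line, a quick check for the collection (assuming the same Solr host and port as above) would be something like:

curl "http://sandbox.hortonworks.com:8886/solr/admin/collections?action=LIST&wt=json"
# the returned JSON should include "ranger_audits" in the collections list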
02-10-2017
09:33 PM
I also recommend that we keep to one issue per HCC topic. I believe my initial comment answered the original question, so please mark it as accepted if you agree. Thanks
02-10-2017
09:32 PM
Hmmm.. interesting. I'm not sure how the postgres DB failed or stopped, but now that we've established this is the Sandbox, I'd suggest just killing this instance, deploying a fresh Sandbox instance, and trying the steps I've outlined in my first reply.
02-10-2017
09:20 PM
What's the output of the ambari-server status command? You should see something like:

[root@sandbox ~]# ambari-server status
Using python /usr/bin/python
Ambari-server status
Ambari Server running
Found Ambari Server PID: 1356 at: /var/run/ambari-server/ambari-server.pid
[root@sandbox ~]#

If it's stopped, try:

ambari-server start
ambari-agent restart

Then try the Ambari URL again.
02-10-2017
08:53 PM
2 Kudos
Hi @Manish, you cannot run the ambari-server setup command on the Hortonworks Sandbox. It's already been completed for you. Once the Sandbox is running, go to http://127.0.0.1:8888/ to get to the welcome page and to enable your access to the Ambari UI. See below: Then click on Quick Links on the bottom right to get the Ambari access instructions:
02-07-2017
07:19 PM
What were your steps prior to invoking this command? For example, did Ambari server setup complete successfully?
02-07-2017
07:03 PM
@Aleksandar Razmovski, did you verify your DNS settings to confirm the Ambari node can see the other nodes and vice versa? The logs don't appear to report a fully qualified domain name (FQDN). The information to verify this is here.
02-06-2017
02:52 PM
Looks like the kinit is working. Did you try a beeline connection, and was it successful?
02-03-2017
07:01 PM
I can't speak to the logging issue just yet, but is there a problem with the cluster behavior? Can you run:

kinit -k -t keytab principal

and then connect with beeline using this connection string:

!connect jdbc:hive2://hostname:10000/default;principal=hive/_HOST@REALM
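For example, end to end (the keytab path, principal, hostname, and realm below are placeholders for your environment):

kinit -kt /etc/security/keytabs/hive.service.keytab hive/host.example.com@EXAMPLE.COM
klist
# run a simple statement to prove the connection works
beeline -u "jdbc:hive2://hostname:10000/default;principal=hive/_HOST@REALM" -e "show databases;"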
01-25-2017
04:42 PM
- "also deletion of an HAR file must be tracked in protocols that are also safe against manipulation": Ranger has audit capabilities, and it integrates with AD/LDAP services.
- "it must be possible to preserve the deletion of archives for a defined period of time (for example 10 years)": Within Ranger you can remove users' access to the files, and you can use HDFS for archival (it's pretty good for that ;). If users need to access this "cold" data again, just re-enable the permission within Ranger. From the user's perspective the file was "deleted."

To @mqureshi's point, you'll need to think about the application layer. You don't want to load these small files one at a time into HDFS, and the app you pick can help you enforce some of your requirements. You can use NiFi to acquire, route, and transform the HAR data prior to landing it in HDFS, so look into that as well (a minimal HAR example is sketched below). Hope this helps,
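A rough sketch of creating and inspecting a HAR, with made-up paths, purely for reference:

# archive the "scans" directory under /landing into /archive/scans-2017.har
hadoop archive -archiveName scans-2017.har -p /landing scans /archive
# list the archived contents through the har:// filesystem
hdfs dfs -ls har:///archive/scans-2017.har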
01-25-2017
04:03 PM
1 Kudo
Hi @Alexander Lösel, can you expand on what you mean by "revision safe"? If you want read-only access for users on those files, you can specify that within Ranger. You can manually set HDFS ACL permissions via the command line, but Ranger is the way to go if you're planning to manage access in a multi-tenant environment.
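For reference, the manual HDFS ACL route looks roughly like this (the user and path are made up):

# grant a user read-only access to a directory, then confirm the ACL
hdfs dfs -setfacl -m user:alexander:r-- /archive/reports
hdfs dfs -getfacl /archive/reports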
09-14-2016
12:29 AM
3 Kudos
Just some clarifications to the instructions provided in this article. Hopefully, it will save you time and get you up and running faster.

1) NiFi 401:Unauthorized Error

If you see a similar error message in the NiFi console:

ERROR [Timer-Driven Process Thread-6] o.a.nifi.processors.standard.PostHTTP PostHTTP[id=834bb9f9-a15d-42bd-8d7a-3f00c810d729] Failed to Post StandardFlowFileRecord[uuid=bc74e1c5-12e7-4da2-93b4-a3dc624218ac,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1471370511531-1, container=default, section=1], offset=11780, length=407],offset=0,name=12406098021784,size=407] to http://gcm-http.googleapis.com/gcm/send: response code was 401:Unauthorized

and you click "Notify Customer" in the Analyst's console but no event is updated on the Android emulator in Android Studio.

Resolution for the 401:Unauthorized error: Open the UpdateAttribute processor, add a new property called Authorization, and set its value to key=your google browser key.

2) Wrong Google API number

The API project number is 12 digits. Do not use the number at the end of the Google API Project ID (for example, ID: api-project-555555555).

Resolution for the Google API number: Use the number given in the Google API console under Project Number (see screenshot below). This allows the Sandbox to link successfully to the mobile emulator; otherwise, you'll click "Notify Customer" in the Analyst's console and no event will be updated on the Android emulator in Android Studio.

3) Mobile Application Compilation

FYI, I thought it would help to provide an example for this section of the Readme, because the Readme mangles the XML syntax. Under the res folder, browse to:

app --> res --> values --> google_maps_api.xml (debug):

<string name="google_maps_key" templateMergeStrategy="preserve" translatable="false">ENTER YOUR GOOGLE BROWSER KEY CREDENTIAL HERE</string>

Here's what it looks like in Android Studio with the XML syntax preserved, shown in the green box below between the "><" identifying the string. Hope this helps!
Tags: Design & Architecture, FAQ, hdf, HDP, How-To/Tutorial
08-30-2016
10:30 PM
@hari Kishore javvaji,
The kinit command can renew and/or obtain the Kerberos ticket. I believe the warning is telling the ticket is expired and can't be renewed even if you wanted to. Try a kdestroy to prior to kinit -kt <keytab> <prin.> see if the warning goes away. Regarding your question about ticket lifetime vs ticket renewable, here's how I'd summarize it: The ticket cannot be used at the end of the ticket lifetime. If the renewable lifetime is longer than ticket lifetime (like yours), the user holding the ticket, can renew the ticket before the ticket lifetime or renewal time expires. If renewed, the fresh ticket will have a new lifetime dating to the current time but renewals are constrained by renew lifetime.
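A minimal sketch of that sequence (the keytab path and principal are placeholders):

kdestroy                                                        # discard the expired, non-renewable ticket
kinit -kt /etc/security/keytabs/myuser.keytab myuser@EXAMPLE.COM   # obtain a fresh ticket
klist                                                           # confirm the new "valid starting", "expires", and "renew until" times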
08-29-2016
03:26 PM
How are you obtaining the ticket? 'kinit -R'? If you run the 'klist' command, does the ticket have the same values for its "valid starting" and "renew until" times? If so, the ticket is non-renewable, and the warning might be indicating that. Note that whether or not you can obtain renewable tickets depends on a KDC-wide setting, as well as per-principal settings for both the principal in question and the Ticket Granting Ticket (TGT) service principal for the realm. For example, for an MIT KDC, there is a krb5.conf setting: https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Ambari_Security_Guide/content/_optional_install_a_new_mit_kdc.html
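As a sketch of where renewable lifetimes are configured on a stock MIT KDC (the realm name and durations are just examples; check your own krb5.conf/kdc.conf):

# krb5.conf, [libdefaults] section (client side):
#   ticket_lifetime = 24h
#   renew_lifetime  = 7d
# kdc.conf, [realms] section (KDC side):
#   max_life = 24h
#   max_renewable_life = 7d
# per-principal renewable lifetime, set via kadmin on the KDC:
kadmin.local -q "modprinc -maxrenewlife 7days krbtgt/EXAMPLE.COM@EXAMPLE.COM"
kadmin.local -q "modprinc -maxrenewlife 7days myuser@EXAMPLE.COM"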
08-15-2016
04:43 PM
1 Kudo
There is a similar issue here: https://community.hortonworks.com/questions/23132/i-am-getting-error-in-oozie-workflow-what-i-have-d.html. Please try the recommended suggestion in that post and update this post with whether or not it worked.
08-09-2016
01:05 PM
1 Kudo
The basic design is described at https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-BasicDesign. ACID transactions shouldn't impact analytical queries while inserts are happening: a read can take a version and a write can put a new version of the data without waiting on locks. But this adds the overhead of reading the delta files when you read the main ORC file. As delta files accumulate, you'll need to compact/consolidate the edits, which will use cluster resources; the impact depends on the number of updates. Hive ACID should be used for low concurrency (50 or fewer concurrent users). In general, I recommend using Hive best practices for Hive query performance - http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_performance_tuning/content/ch_hive_hi_perf_best_practices.html
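A rough sketch of what an ACID table and a manual compaction look like, run through beeline (the connection URL and table name are placeholders, and exact requirements vary by HDP version):

# transactional tables must be bucketed ORC tables with transactional=true
beeline -u "jdbc:hive2://hostname:10000/default;principal=hive/_HOST@REALM" -e "CREATE TABLE web_events (id INT, payload STRING) CLUSTERED BY (id) INTO 4 BUCKETS STORED AS ORC TBLPROPERTIES ('transactional'='true');"
# trigger a major compaction to fold accumulated delta files back into the base files
beeline -u "jdbc:hive2://hostname:10000/default;principal=hive/_HOST@REALM" -e "ALTER TABLE web_events COMPACT 'major';"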
06-06-2016
03:38 PM
Hi Roberto, glad the article was helpful! In reply to your questions:
1. Correct, the HDP version is not listed on the Azure Marketplace website. It's certainly something we'll consider. I believe we were trying to reduce the burden on the Microsoft site admins of constantly managing version/link/documentation links. The HDP Azure Marketplace version should match the release cadence of HDP, and after you deploy HDP you can always check the HDP version in Ambari by going to Admin --> Stacks and Versions --> Version. The latest documentation and HDP release notes are always here.
2. The Azure Marketplace deployment is great for non-elastic clusters and for starting to run a pilot use case. If ease of elasticity is a core requirement, take a look at Azure HDInsight to spin up more nodes on demand automatically. Not all the HDP services are on HDInsight, but it may be a great option for your pilot. Thanks, Ameet
05-31-2016
03:26 PM
1 Kudo
I've received a couple of "how to" questions after folks successfully deploy the Hortonworks Data Platform Standard on Microsoft's Azure. I've collected my responses here as a reference for others:

What is Hortonworks Data Platform (HDP) Standard?

It is a multi-node HDP 2.4/Ambari 2.2.1 cluster on Microsoft's Azure Cloud, launched in a few mouse clicks. Hortonworks aims to match this service with the latest version of HDP. You provide:
- your name
- email
- passwords or ssh key
- the number of nodes
- the VM types for your masters and workers
- whether your cluster should be HA or not
as shown in this screenshot:

Where's Ambari?

Once the cluster is successfully deployed (the Azure Dashboard will reflect this), the Ambari service is located on the first master server. In the Azure portal go to Resource group, <your resource group name which was selected at the first "Basics" step>, master1, Settings, and look for the Public IP address. Use a web browser to access Ambari at: <master1 Public IP address>:8080

What's the Ambari username?

The default username is "admin". The password was set under "Ambari password" in the screenshot above.

What are my ssh parameters?

The HDP service ports are enabled by default during the cluster installation. The master nodes allow external ssh access, so use the cluster creation fields from the screenshot above in a terminal:

ssh <cluster admin username>@<your cluster name>-master-01.cloudapp.net

Worker nodes are only accessible via ssh from one of the master nodes (see the sketch at the end of this post).

Why am I receiving "Operation result in exceeding quota limits on Core"?

The default Azure Resource Manager (ARM) core quota is 20, which is not enough to deploy an HDP Standard cluster. Prior to deploying the cluster, request an ARM core quota increase to at least 120. Details on requesting a quota increase are here; remember that ARM core quotas are Azure region specific.
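A minimal sketch of hopping to a worker through a master node (the worker hostname below is a placeholder; use the node names shown in your Azure portal, and -A only helps if you chose ssh key access):

ssh -A <cluster admin username>@<your cluster name>-master-01.cloudapp.net
# then, from the master node, reach a worker by its internal hostname
ssh <cluster admin username>@<worker node internal hostname>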
Tags: azure, Cloud & Operations, FAQ, hdp2.4
03-28-2016
05:10 PM
Hi @marksf, what are the system specs (CPU, RAM, etc.)? Not that this is necessarily the root cause of the problem, but it can help as we start to suggest options. Was this MySQL service a fresh install with HDP, or was it pre-existing?
02-05-2016
03:56 PM
4 Kudos
Here are some lessons learned while trying to deploy the latest version of Cloudbreak (1.1.0) on Azure. Please refer to the latest Hortonworks Cloudbreak documentation for the detailed steps and use this article as a supplement until the documents are updated.

Logging into the Deployer VM

There is a prebuilt image for the Azure Cloudbreak deployer. This image does not require an ssh key; however, record the username and password you've specified at setup in the Azure portal (see the section in green highlight below). Those are your credentials to log in to the Deployer VM and set up the Cloudbreak services later. Once the VM deploys (refer to the Azure portal for status), grab the Public IP specified in the Azure portal and ssh into the VM:

ssh <your specified username>@<public IP specified in Azure portal>
password: <enter your specified password>

Deployer VM initiation steps

Once you've logged into the Deployer VM, run these initiation steps, which are currently missing from the document. Create a file called Profile:

cd ~
vi ./Profile

Add the following:

export PUBLIC_IP=<the public IP address of the Deployer VM>

You can always find the Deployer VM's public IP address in the Azure portal. Do not leave a space between the equals sign and the IP, e.g. PUBLIC_IP=12.34.245 rather than PUBLIC_IP= 12.34.245. Now run the following:

cbd init

The output looks something like:

Profile already exists, now you are ready to run:
cbd generate
===> Deployer doctor: Checks your environment, and reports a diagnose.
local version:1.1.0
latest release:1.1.0
docker command: OK
docker client version: OK
docker server version: OK

Now run cbd generate and enter the VM's password you specified at setup when prompted. The output looks something like:

generating docker-compose.yml
generating uaa.yml

At this point you can run the 'cbd' commands for the Azure application setup with Cloudbreak Deployer and deployment of a DASH service in Cloudbreak Deployer as shown in the document.

Displaying your Cloudbreak UI credentials

Run the following command to output your Cloudbreak UI credentials (note: you don't use your Azure AD user for this login):

cbd login

It will output something like:

Uluwatu (Cloudbreak UI) url:
http://<Deployer VM's public IP>:3000
login email:
***@******.com
password:
*********

Request an Azure quota increase

Lastly, Azure has a default limit of 20 cores in a region. Follow these steps to request a quota increase, because the Deployer VM together with a deployment of the hdp-small-default Ambari blueprint will exceed the default core limit. Happy Cloudbreak deploying!
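For reference, the initiation sequence above condenses to a few commands on the Deployer VM (the IP address is a placeholder):

cd ~
echo 'export PUBLIC_IP=<the public IP address of the Deployer VM>' > Profile
cbd init
cbd generate   # enter the VM password you specified at setup when prompted
cbd login      # prints the Cloudbreak UI URL, login email, and password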
Tags: azure, Cloud & Operations, Cloudbreak, FAQ, microsoft azure
02-03-2016
05:18 PM
2 Kudos
Awesome! I was able to reproduce the issue and verify the workaround you've posted. As an FYI to others, here are the detailed steps to resolve this issue. To use Ambari to manage (start/stop) the Zeppelin service, run the following commands on the node running the Ambari server. For example, on CentOS 6.*:

yum install -y git
VERSION=`hdp-select status hadoop-client | sed 's/hadoop-client - \([0-9]\.[0-9]\).*/\1/'`
sudo git clone https://github.com/hortonworks-gallery/ambari-zeppelin-service.git /var/lib/ambari-server/resources/stacks/HDP/$VERSION/services/ZEPPELIN
sudo service ambari-server restart

On a node (call it 'Node A') that is not running the Ambari server, install the nss package:

yum install -y nss

Once Ambari is back up and you've installed the nss package on "Node A", in Ambari go to Actions -> Add Service -> check the Zeppelin service -> place the Zeppelin service on Node A in the Assign Masters step and click Next -> Next -> Next -> Deploy. The installation will start once you click Deploy. Once complete, the Zeppelin Notebook service will be running. You can navigate to http://<FQDN of Node A>:9995 or follow the steps here to create the Ambari view.
02-02-2016
04:13 PM
@Mangesh Kaslikar if you're still running into trouble, please list the steps/commands you executed so others can try to reproduce your issue. Thanks.
12-04-2015
07:33 PM
@Ali Bajwa and @Dhruv Kumar, thanks for the suggestions. Like you, I could not reproduce this on a fresh install. I no longer have access to the environment that was showing this behavior, but I know it had gone through multiple Zeppelin version changes, and perhaps that caused this behavior...
12-04-2015
03:32 PM
After following the Apache Zeppelin setup provided here - https://urldefense.proofpoint.com/v2/url?u=http-3A... Zeppelin notebook does not show output after executing commands successfully. Here's a subset of the errors seen in YARN logs: Stack trace: ExitCodeException exitCode=1: /grid/1/hadoop/yarn/local/usercache/root/appcache/application_1447968118518_0003/container_e03_1447968118518_0003_02_000004/launch_container.sh: line 23: :/usr/hdp/current/spark-client/bin/zeppelin-0.6.0-incubating-SNAPSHOT/interpreter/spark/dep/*:/usr/hdp/current/spark-client/bin/zeppelin-0.6.0-incubating-SNAPSHOT/interpreter/spark/*:/usr/hdp/current/spark-client/bin/zeppelin-0.6.0-incubating-SNAPSHOT/lib/*:/usr/hdp/current/spark-client/bin/zeppelin-0.6.0-incubating-SNAPSHOT/*::/usr/hdp/current/spark-client/bin/zeppelin-0.6.0-incubating-SNAPSHOT/conf:/usr/hdp/current/spark-client/bin/zeppelin-0.6.0-incubating-SNAPSHOT/conf:/usr/hdp/current/spark-client/bin/zeppelin-0.6.0-incubating-SNAPSHOT/conf:/etc/hadoop/conf:$PWD:$PWD/__spark__.jar:$HADOOP_CONF_DIR:/usr/hdp/current/hadoop-client/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-hdfs-client/*:/usr/hdp/current/hadoop-hdfs-client/lib/*:/usr/hdp/current/hadoop-yarn-client/*:/usr/hdp/current/hadoop-yarn-client/lib/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure: bad substitution Noticed the mapred-site.xml had "${hdp.version}" variables that were not replaced. The workaround was replacing the variable with the actual hdp version in the mapred-site.xml then restarting. See the screenshot below: This is posted as an FYI in case anyone else runs into a similar issue. I don't have a root cause for this behavior at this time.
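A rough sketch of the workaround described above (the version string shown is only an example; use whatever hdp-select reports on your node, and adjust every ${hdp.version} occurrence your mapred-site.xml actually contains):

# find the concrete HDP version installed on the node
hdp-select status hadoop-client
# e.g. output: hadoop-client - 2.3.2.0-2950
# then, in mapred-site.xml, replace every ${hdp.version} token with that value, for example:
#   /usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar
# becomes
#   /usr/hdp/2.3.2.0-2950/hadoop/lib/hadoop-lzo-0.6.0.2.3.2.0-2950.jar
# and restart the affected services.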
Labels: Apache Spark, Apache Zeppelin
10-23-2015
02:47 PM
1 Kudo
@Ronald, take a look at this workshop - https://github.com/abajwa-hw/solr-stack.