Member since: 02-08-2016
Posts: 793
Kudos Received: 669
Solutions: 85
My Accepted Solutions
Title | Views | Posted
---|---|---
| 2403 | 06-30-2017 05:30 PM
| 3138 | 06-30-2017 02:57 PM
| 2574 | 05-30-2017 07:00 AM
| 3045 | 01-20-2017 10:18 AM
| 5991 | 01-11-2017 02:11 PM
01-11-2017
02:35 PM
Hi Geoffrey, assigning the network adapter in Settings as per your doc worked fine.
Now I'm able to SSH to the VirtualBox VM. Thanks.
03-08-2017
08:00 AM
Only Cloudera SRPMs are available. http://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/5.10/
01-06-2017
12:43 PM
Problem Statement: Whenever HBase is running, "hdfs fsck /" reports four HBase-related files under the path "hbase/data/WALs/" as CORRUPT. Even after letting the cluster sit idle for a couple of hours, it remains in the corrupt state. If HBase is shut down, the problem goes away; if HBase is then restarted, the problem recurs.
ERROR:
hdfs@test1:>$ hdfs fsck /
Connecting to namenode via http://test1.example.com:50070/fsck?ugi=hdfs&path=%2F
FSCK started by hdfs (auth:SIMPLE) from /39.0.8.2 for path / at Wed Jun 24 20:40:17 GMT 2015
...
/apps/hbase/data/WALs/test2.example.com,16020,1435168292684/test2.example.com%2C16020%2C1435168292684.default.1435175500556: MISSING 1 blocks of total size 83 B.
/apps/hbase/data/WALs/test3.example.com,16020,1435168290466/test3.example.com%2C16020%2C1435168290466..meta.1435175562144.meta: MISSING 1 blocks of total size 83 B.
/apps/hbase/data/WALs/test3.example.com,16020,1435168290466/test3.example.com%2C16020%2C1435168290466.default.1435175498500: MISSING 1 blocks of total size 83 B.
/apps/hbase/data/WALs/test4.example.com,16020,1435168292373/test4.example.com%2C16020%2C1435168292373.default.1435175500301: MISSING 1 blocks of total size 83 B.
...
Status: CORRUPT
Total size: 723977553 B (Total open files size: 332 B)
Total dirs: 79
Total files: 388
Total symlinks: 0 (Files currently being written: 5)
Total blocks (validated): 387 (avg. block size 1870743 B) (Total open file blocks (not validated): 4)
********************************
UNDER MIN REPL'D BLOCKS: 4 (1.0335917 %)
dfs.namenode.replication.min: 1
CORRUPT FILES: 4
MISSING BLOCKS: 4
MISSING SIZE: 332 B
********************************
Minimally replicated blocks: 387 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 3.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 3
Number of racks: 1
FSCK ended at Wed Jun 24 20:40:17 GMT 2015 in 7 milliseconds
The filesystem under path '/' is CORRUPT
hdfs@test1:>
ROOT CAUSE: This is a bug in HDFS - https://issues.apache.org/jira/browse/HDFS-8809. A patch is available for this issue, and the fix will be included in upcoming HDP releases.
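Because the "missing" blocks belong to WAL files that HBase still holds open for write, you can confirm they are open files rather than genuine corruption. A minimal sketch, using the WAL path from the example above:

# flags open-for-write files explicitly instead of counting them as corrupt
hdfs fsck /apps/hbase/data/WALs -openforwrite -files -blocks

Files listed as OPENFORWRITE here stop being reported once HBase closes them, which matches the behavior described above when HBase is shut down.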
01-04-2017
08:41 AM
2 Kudos
This article will guide you on how to access Ranger KMS policies via the REST API.
1. I have a cluster with Ranger and Ranger KMS installed.
2. The documentation clearly explains how to access Ranger policies using the REST API; please check the link below:
https://cwiki.apache.org/confluence/display/RANGER/REST+APIs+for+Policy+Management
3. Below is an example of a Ranger REST API GET call.
a. A sample Ranger policy for the HDFS repository in the UI is shown in the screenshot. Below is the REST API GET call which we can use to fetch that policy:
curl -iv -u <username>:<password> -H "Content-type:application/json" -X GET http://localhost:6080/service/public/api/repository/{id}
Eg. In above screenshot my policy id is "2"
curl -iv -u admin:admin -H "Content-type:application/json" -X GET http://localhost:6080/service/public/api/repository/2
4. Below is my Ranger KMS UI with a policy. If you try the same steps for Ranger KMS, it will fail:
curl -iv -u <username>:<password> -H "Content-type:application/json" -X GET http://localhost:6080/service/public/api/repository/{id}
Eg. In above screenshot my policy id is "1"
curl -iv -u admin:admin -H "Content-type:application/json" -X GET http://localhost:6080/service/public/api/repository/1 ERROR: [root@localhost ~]# curl -iv -u admin:admin -H "Content-type:application/json" -X GET http://localhost:6080/service/public/api/repository/1
* About to connect() to khichadi1.openstacklocal port 6080 (#0)
* Trying 172.26.81.49... connected
* Connected to khichadi1.openstacklocal (172.26.81.49) port 6080 (#0)
* Server auth using Basic with user 'admin'
> GET /service/public/api/repository/1 HTTP/1.1
> Authorization: Basic YWRtaW46YWRtaW4=
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: khichadi1.openstacklocal:6080
> Accept: */*
> Content-type:application/json
>
< HTTP/1.1 204 No Content
HTTP/1.1 204 No Content
< Server: Apache-Coyote/1.1
Server: Apache-Coyote/1.1
< Set-Cookie: JSESSIONID=30095A2BD0FBFB384F24734183851135; Path=/; HttpOnly
Set-Cookie: JSESSIONID=30095A2BD0FBFB384F24734183851135; Path=/; HttpOnly
< X-Frame-Options: DENY
X-Frame-Options: DENY
< Content-Type: application/json
Content-Type: application/json
< Date: Wed, 04 Jan 2017 08:32:57 GMT
Date: Wed, 04 Jan 2017 08:32:57 GMT
<
* Connection #0 to host khichadi1.openstacklocal left intact
* Closing connection #0
5. The REST API for Ranger KMS is slightly different from Ranger's. In the above examples, instead of /service/public/api/repository/{id} you need to use /service/plugins/policies/{id} for Ranger KMS.
6. Below is a sample GET call for Ranger KMS:
curl -iv -u <username>:<password> -H "Content-type:application/json" -X GET http://localhost:6080/service/plugins/policies/{id}
E.g., in the above screenshot my policy id is "1":
curl -iv -u admin:admin -H "Content-type:application/json" -X GET http://localhost:6080/service/plugins/policies/1
For the rest of the methods (create/update/delete) you can use the above examples, replacing the method type; a sketch follows below. Refer to https://cwiki.apache.org/confluence/display/RANGER/Apache+Ranger+0.6+-+REST+APIs+for+Service+Definition%2C+Service+and+Policy+Management for details, and modify the examples in that link accordingly for the Ranger KMS REST API. Let me know if you have any questions about this article. Thanks.
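As a sketch of the other methods, assuming the same policy id "1" and admin credentials used above, and a hypothetical file policy1.json containing the edited policy JSON obtained from a prior GET:

# delete a Ranger KMS policy
curl -iv -u admin:admin -H "Content-type:application/json" -X DELETE http://localhost:6080/service/plugins/policies/1

# update a Ranger KMS policy, sending back the (edited) JSON from a GET
curl -iv -u admin:admin -H "Content-type:application/json" -X PUT -d @policy1.json http://localhost:6080/service/plugins/policies/1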
12-28-2016
07:28 PM
3 Kudos
SYMPTOM:
The Ambari upgrade command fails with: Error Code: 1005. Can't create table 'ambaridb.#sql-4168_34d2' (errno: 150)
ERROR:
CREATE TABLE blueprint_setting (id BIGINT NOT NULL, blueprint_name VARCHAR(255) NOT NULL, setting_name VARCHAR(255) NOT NULL, setting_data LONGTEXT NOT NULL)
03 Oct 2016 09:30:06,326 INFO [main] DBAccessorImpl:824 - Executing query: ALTER TABLE blueprint_setting ADD CONSTRAINT PK_blueprint_setting PRIMARY KEY (id)
03 Oct 2016 09:30:06,388 INFO [main] DBAccessorImpl:824 - Executing query: ALTER TABLE blueprint_setting ADD CONSTRAINT UQ_blueprint_setting_name UNIQUE (blueprint_name, setting_name)
03 Oct 2016 09:30:06,489 INFO [main] DBAccessorImpl:824 - Executing query: ALTER TABLE blueprint_setting ADD CONSTRAINT FK_blueprint_setting_name FOREIGN KEY (blueprint_name) REFERENCES blueprint (blueprint_name)
03 Oct 2016 09:30:06,545 ERROR [main] DBAccessorImpl:830 - Error executing query: ALTER TABLE blueprint_setting ADD CONSTRAINT FK_blueprint_setting_name FOREIGN KEY (blueprint_name) REFERENCES blueprint (blueprint_name)
java.sql.SQLException: Can't create table 'ambaridb.#sql-4168_34e3' (errno: 150)
The SHOW ENGINE INNODB STATUS; command shows the details below:
------------------------
LATEST FOREIGN KEY ERROR
------------------------
161003 9:30:06 Error in foreign key constraint of table ambaridb/#sql-4168_34e3:
FOREIGN KEY (blueprint_name) REFERENCES blueprint (blueprint_name):
Cannot find an index in the referenced table where the
referenced columns appear as the first columns, or column types
in the table and the referenced table do not match for constraint.
Note that the internal storage type of ENUM and SET changed in
tables created with >= InnoDB-4.1.12, and such columns in old tables
cannot be referenced by such columns in new tables.
See http://dev.mysql.com/doc/refman/5.1/en/innodb-foreign-key-constraints.html
for correct foreign key definition.
ROOT CAUSE: The issue was related to a mismatch in the character sets of the tables in MySQL. When Ambari initially created the database, it set the tables to use UTF8. During the upgrade, new tables created via the UpgradeCatalog classes were set to use the LATIN1 character set. More specifically, the blueprint table was created with the UTF8 character set, while the blueprint_setting table was created with LATIN1. This was seen using queries like:
SELECT character_set_name FROM information_schema.`COLUMNS`
WHERE table_schema = "ambari"
AND table_name = "blueprint";
SELECT character_set_name FROM information_schema.`COLUMNS`
WHERE table_schema = "ambari"
AND table_name = "blueprint_setting";
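To see the mismatch side by side, a single query can list both tables' column character sets; a minimal sketch run through the mysql client, assuming the schema is named "ambari" and root access:

# lists each column's character set for both tables; the utf8 vs latin1 mismatch shows up here
mysql -u root -p -e "SELECT table_name, column_name, character_set_name FROM information_schema.COLUMNS WHERE table_schema = 'ambari' AND table_name IN ('blueprint', 'blueprint_setting');"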
RESOLUTION: This was fixed by dropping the blueprint_setting table and manually recreating it with the syntax below (using CHARACTER SET utf8):
CREATE TABLE blueprint_setting (
id BIGINT NOT NULL,
blueprint_name VARCHAR(100) NOT NULL,
setting_name VARCHAR(100) NOT NULL,
setting_data MEDIUMTEXT NOT NULL,
CONSTRAINT PK_blueprint_setting PRIMARY KEY (id),
CONSTRAINT UQ_blueprint_setting_name UNIQUE(blueprint_name,setting_name),
CONSTRAINT FK_blueprint_setting_name FOREIGN KEY (blueprint_name) REFERENCES blueprint(blueprint_name))
CHARACTER SET utf8;
12-28-2016
07:23 PM
3 Kudos
ISSUE: While upgrading HDP, the Ranger service failed with the error below:
Failed to apply patch "020-datamask-policy.sql" with error "Not able to drop table 'x_datamask_type_def'": foreign key constraint fails, Error Code 1217
ROOT CAUSE: The drop was blocked by foreign_key_checks. RESOLUTION: Ran "SET foreign_key_checks = 0;" in the MySQL Ranger database, dropped the table manually, re-created it, and then ran "SET foreign_key_checks = 1;" again; see the sketch below.
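A minimal sketch of that sequence through the mysql client, assuming the Ranger database is named "ranger" (both SET statements must run in the same session as the DROP, hence the single heredoc; the table is then re-created, e.g. by re-running the Ranger upgrade patch):

mysql -u root -p ranger <<'SQL'
-- temporarily disable FK checks so the table can be dropped
SET foreign_key_checks = 0;
DROP TABLE IF EXISTS x_datamask_type_def;
-- re-enable FK checks before re-creating the table
SET foreign_key_checks = 1;
SQL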
12-28-2016
07:19 PM
3 Kudos
Issue: While performing an HDP downgrade, the final "Finalize Downgrade" step completed successfully, but 'Downgrade in Progress' remained stuck at 99%.
ROOT CAUSE: A few tasks in the host_role_command table were still in PENDING state. Below is sample output:
SELECT task_id, status, event, host_id, role, role_command, command_detail, custom_command_name
FROM host_role_command
WHERE request_id = 858 AND status != 'COMPLETED'
ORDER BY task_id DESC;
8964, PENDING, 4, KAFKA_BROKER, CUSTOM_COMMAND, RESTART KAFKA/KAFKA_BROKER, RESTART
8897, PENDING, 4, KAFKA_BROKER, CUSTOM_COMMAND, STOP KAFKA/KAFKA_BROKER, STOP
RESOLUTION: We need to manually move the tasks to the COMPLETED state:
UPDATE host_role_command SET status = 'COMPLETED' WHERE request_id = 858 AND status = 'PENDING';
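Afterwards, it is worth confirming nothing is still pending for that request; a minimal sketch through the mysql client, assuming the Ambari database is named "ambari":

# should return 0 once all tasks for request 858 are moved to COMPLETED
mysql -u ambari -p ambari -e "SELECT COUNT(*) FROM host_role_command WHERE request_id = 858 AND status = 'PENDING';"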
12-28-2016
07:16 PM
3 Kudos
Issue: When a user logged in, no service tabs or action buttons were displayed in the Ambari UI. The Ambari UI displayed the notification "Move Master Wizard In Progress" on the dashboard.
ROOT CAUSE: A user with admin access to the Ambari UI had started an operation and left the wizard open; that user was no longer online. This is related to the Hortonworks internal JIRA https://hortonworks.jira.com/browse/EAR-4843.
RESOLUTION: To get past this problem, we ran the command below with the userName in the JSON set to admin; then, after logging in as admin, we were able to close the wizard, which solved the problem.
curl -u admin -i -H 'X-Requested-By: ambari' -X POST -d '{"wizard-data":"{\"userName\":\"admin\",\"controllerName\":\"<Controller_Name>\"}"}' http://<ambari-host>:8080/api/v1/persist
Below is the command I used:
curl -u admin -i -H 'X-Requested-By: ambari' -X POST -d '{"wizard-data":"{\"userName\":\"admin\",\"controllerName\":\"moveMasterWizard\"}"}' http://localhost:8080/api/v1/persist
12-29-2016
06:15 AM
1 Kudo
This is an issue with Ambari versions prior to 2.2.0; the article should have clarified that. The JIRA specifies that it is fixed in 2.2.0. However, search engines will not surface this link in searches, so the article's exposure is extremely limited.
12-27-2016
08:06 PM
4 Kudos
SYMPTOM: Knox can fetch the LDAP user but can't find the user's related groups. Our LDAP server is OpenLDAP (Red Hat). The membership attribute is defined in groups as "uniquemember".
ERROR:
2016-05-09 14:42:01,229 INFO hadoop.gateway (KnoxLdapRealm.java:getUserDn(556)) - Computed userDn: uid=a196011,ou=people,dc=hadoop,dc=apache,dc=org using dnTemplate for principal: a196011
2016-05-09 14:42:01,230 INFO hadoop.gateway (KnoxLdapRealm.java:doGetAuthenticationInfo(180)) - Could not login: org.apache.shiro.authc.UsernamePasswordToken - a196xxx, rememberMe=false (10.xxx.xx.64)
2016-05-09 14:42:01,230 DEBUG hadoop.gateway (KnoxLdapRealm.java:doGetAuthenticationInfo(181)) - Failed to Authenticate with LDAP server: {1}
org.apache.shiro.authc.AuthenticationException: LDAP naming error while attempting to authenticate user.
at org.apache.shiro.realm.ldap.JndiLdapRealm.doGetAuthenticationInfo(JndiLdapRealm.java:303)
The initial error above was due to an LDAP misconfiguration. After correcting the LDAP configuration, the error below appeared:
"operation not supported in Standby mode"
2016-04-29 23:59:08,389 ERROR provider.BaseAuditHandler (BaseAuditHandler.java:logError(329)) - Error writing to log file.
java.lang.IllegalArgumentException: java.net.UnknownHostException: bigre7clu
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:406)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:311)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:678)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
at org.apache.ranger.audit.destination.HDFSAuditDestination.getLogFileStream(HDFSAuditDestination.java:221)
at org.apache.ranger.audit.destination.HDFSAuditDestination.logJSON(HDFSAuditDestination.java:123)
at org.apache.ranger.audit.queue.AuditFileSpool.sendEvent(AuditFileSpool.java:890)
at org.apache.ranger.audit.queue.AuditFileSpool.runDoAs(AuditFileSpool.java:838)
at org.apache.ranger.audit.queue.AuditFileSpool$2.run(AuditFileSpool.java:759)
at org.apache.ranger.audit.queue.AuditFileSpool$2.run(AuditFileSpool.java:757)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
at org.apache.ranger.audit.queue.AuditFileSpool.run(AuditFileSpool.java:765)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException: bigre7clu
ROOT CAUSE: The customer's cluster used NameNode HA, but Knox was not configured for NameNode HA, so the HA nameservice name ("bigre7clu") was treated as a hostname that could not be resolved.
RESOLUTION: Configured Knox with HA for WebHDFS, which resolved the issue:
<provider>
<role>ha</role>
<name>HaProvider</name>
<enabled>true</enabled>
<param>
<name>WEBHDFS</name>
<value>maxFailoverAttempts=3;failoverSleep=1000;maxRetryAttempts=300;retrySleep=1000;enabled=true</value>
</param>
</provider>
<service>
<role>WEBHDFS</role>
<url>http://{host1}:50070/webhdfs</url>
<url>http://{host2}:50070/webhdfs</url>
</service>
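Once the topology is updated, a quick way to verify failover-aware WebHDFS access through Knox is a curl call against the gateway; a sketch, assuming the default topology, gateway port 8443, and the Knox demo LDAP credentials:

# lists the HDFS root through Knox; with the HaProvider this keeps working after a NameNode failover
curl -iku admin:admin-password 'https://localhost:8443/gateway/default/webhdfs/v1/?op=LISTSTATUS'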