Member since
02-01-2018
52
Posts
2
Kudos Received
0
Solutions
11-29-2018
09:30 AM
1 Kudo
This is more of an Azure question. Take a look at this: http://www.florisvanderploeg.com/converting-azure-managed-disks-to-unmanaged/
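For anyone landing here, the gist of the linked approach, sketched with the Azure CLI (the resource group, disk, container, and account names below are placeholders, not from my setup): grant temporary read access to the managed disk, copy the underlying VHD into a storage account, then revoke access.
# Get a short-lived read-only SAS URI for the managed disk
SAS=$(az disk grant-access --resource-group myGroup --name myManagedDisk \
      --duration-in-seconds 3600 --query accessSas -o tsv)
# Copy the VHD into a container usable for unmanaged disks
az storage blob copy start --source-uri "$SAS" \
  --destination-container vhds --destination-blob myDisk.vhd \
  --account-name mystorageaccount
# Revoke the SAS once the copy has completed
az disk revoke-access --resource-group myGroup --name myManagedDisk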
... View more
10-18-2018
04:48 PM
The problem is that the Azure provider for Packer does not let you spin up a VM from a managed disk and capture a VHD. If you start with a managed disk you can only produce another managed disk, and likewise with VHDs. It's not the end of the world, but I had to add complexity to my orchestration, because I have to start from a base corporate image, which is a managed disk.
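To make the constraint concrete, these are the two mutually exclusive output settings in Packer's azure-arm builder (resource names are placeholders). The first produces a managed image and is the only choice when the source is a managed disk; the second captures a VHD into a storage account and requires a Marketplace or VHD source:
"managed_image_resource_group_name": "myImages",
"managed_image_name": "hdp-base"
versus:
"resource_group_name": "myImages",
"storage_account": "mystorageaccount",
"capture_container_name": "images",
"capture_name_prefix": "hdp-base"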
... View more
10-09-2018
11:26 AM
I have a kerberized HDP cluster with Ranger enabled, where the data is encrypted by KMS with multiple encryption zones. Users can only access data via the Hive interface. To access all the data I can choose one of these options: 1. Give the hive user access to the specific HDFS folders, along with permission to decrypt the data. However, if the hive user gets compromised, it will have access to all the data. 2. Enable the doAs option in Hive and access HDFS as the end user. This, however, requires policies for the user on both HDFS and Hive, and if the user has direct access to HDFS (for some reason), the column-level Hive permissions become useless. What's the valid approach here?
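For reference, option 2 hinges on this HiveServer2 impersonation setting in hive-site.xml (shown only to pin down which knob is being discussed):
<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>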
... View more
Labels:
- Apache Hive
- Apache Ranger
10-09-2018
11:17 AM
Will Cloudbreak support Azure managed disks for the custom image catalog anytime soon? I currently use Packer to build my base images for HDP, but it stores them in a storage account, which Azure no longer recommends and which Packer has already marked as deprecated.
... View more
Labels:
- Hortonworks Cloudbreak
09-20-2018
09:41 AM
Hiya @Dominika Bialek, I like this release because of the user authorisation model and operations audit logging; that's exactly what enterprise organisations need. I'm using cbd quite extensively; is there any chance you could share the roadmap for the next releases? Cheers!
... View more
09-17-2018
02:21 PM
Hi @mmolnar I've deployed Knox via Ambari and it works as expected. Now I have Cloudbreak and the cluster 100% behind the proxy.
... View more
09-17-2018
02:19 PM
This is very nice, but I went even further (I would even say much further) and created an Ansible role for CloudBreak configuration and cluster deployment. However, I can't share it yet due to a non-disclosure agreement with my client.
... View more
09-08-2018
08:19 PM
Hi @pdarvasi, I confirm this workaround did the job for me. Thanks again. Also, I saw on GitHub that work is being done on 2.8 and even 2.9. Can I read about the new features somewhere and maybe do a bit of testing?
... View more
09-08-2018
10:58 AM
Hi, I'm using Cloudbreak 2.7.1 in a locked-down environment with a proxy set up. I was able to configure everything properly: cbd can talk to my cluster, and a basic cluster was deployed successfully. However, that's not the case when I enable the Knox gateway at the Cloudbreak level: the whole installation fails. I think it's a bug (Knox is not Ambari-managed here, so I guess it's installed in a different way and doesn't respect the proxy env vars or similar), which I don't know how to resolve. This is currently a blocker for me.
... View more
Labels:
- Apache Knox
- Hortonworks Cloudbreak
09-06-2018
08:13 AM
Hi guys, thanks for your answers. It turned out to be a Postgres issue, which I still can't explain. I moved from the managed offering to an IaaS deployment. Thanks again.
... View more
08-26-2018
09:06 AM
Hi, I've installed HDP behind a proxy, but Ranger Admin fails to start: it times out while applying Java patch number 5. Worth mentioning: it uses an external Postgres in Azure (via a service endpoint). Here's the log:
2018-08-26 08:47:46,205 [I] DB FLAVOR :POSTGRES
2018-08-26 08:47:46,205 [I] --------- Verifying Ranger DB connection ---------
2018-08-26 08:47:46,205 [I] Checking connection
2018-08-26 08:47:46,205 [JISQL] /usr/lib/jvm/java/bin/java -cp /usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/postgresql-jdbc3.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://some-db.postgres.database.azure.com:5432/ranger -u someuser@some-db -p '********' -noheader -trim -c \; -query "SELECT 1;"
2018-08-26 08:47:47,585 [I] connection success
2018-08-26 08:47:47,585 [I] --------- Verifying version history table ---------
2018-08-26 08:47:47,585 [I] Verifying table x_db_version_h in database ranger
2018-08-26 08:47:47,585 [JISQL] /usr/lib/jvm/java/bin/java -cp /usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/postgresql-jdbc3.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://some-db.postgres.database.azure.com:5432/ranger -u someuser@some-db -p '********' -noheader -trim -c \; -query "select * from (select table_name from information_schema.tables where table_catalog='ranger' and table_name = 'x_db_version_h') as temp;"
2018-08-26 08:47:48,332 [I] Table x_db_version_h already exists in database ranger
2018-08-26 08:47:48,332 [I] --------- Importing Ranger Core DB Schema ---------
2018-08-26 08:47:48,332 [JISQL] /usr/lib/jvm/java/bin/java -cp /usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/postgresql-jdbc3.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://some-db.postgres.database.azure.com:5432/ranger -u someuser@some-db -p '********' -noheader -trim -c \; -query "select version from x_db_version_h where version = 'CORE_DB_SCHEMA' and active = 'Y';"
2018-08-26 08:47:49,064 [I] CORE_DB_SCHEMA is already imported
2018-08-26 08:47:49,065 [JISQL] /usr/lib/jvm/java/bin/java -cp /usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/postgresql-jdbc3.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://some-db.postgres.database.azure.com:5432/ranger -u someuser@some-db -p '********' -noheader -trim -c \; -query "select version from x_db_version_h where version = 'DB_PATCHES' and inst_by = 'Ranger 0.7.0.2.6.5.0-292' and active = 'Y';"
2018-08-26 08:47:49,797 [I] DB_PATCHES have already been applied
2018-08-26 08:47:49,807 - Directory['/usr/hdp/current/ranger-admin/conf'] {'owner': 'ranger', 'group': 'ranger', 'create_parents': True}
2018-08-26 08:47:49,808 - File['/var/lib/ambari-agent/tmp/postgresql-jdbc3.jar'] {'content': DownloadSource('http://az3pb-p02-m1.data.some.gov.uk:8080/resources/postgresql-jdbc3.jar'), 'mode': 0644}
2018-08-26 08:47:49,808 - Not downloading the file from http://az3pb-p02-m1.data.some.gov.uk:8080/resources/postgresql-jdbc3.jar, because /var/lib/ambari-agent/tmp/postgresql-jdbc3.jar already exists
2018-08-26 08:47:49,810 - Execute[('cp', '--remove-destination', u'/var/lib/ambari-agent/tmp/postgresql-jdbc3.jar', u'/usr/hdp/current/ranger-admin/ews/lib')] {'path': ['/bin', '/usr/bin/'], 'sudo': True}
2018-08-26 08:47:49,819 - File['/usr/hdp/current/ranger-admin/ews/lib/postgresql-jdbc3.jar'] {'mode': 0644}
2018-08-26 08:47:49,820 - ModifyPropertiesFile['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'properties': ...}
2018-08-26 08:47:49,820 - Modifying existing properties file: /usr/hdp/current/ranger-admin/install.properties
2018-08-26 08:47:49,828 - File['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'content': ..., 'group': None, 'mode': None, 'encoding': 'utf-8'}
2018-08-26 08:47:49,829 - Writing File['/usr/hdp/current/ranger-admin/install.properties'] because contents don't match
2018-08-26 08:47:49,829 - ModifyPropertiesFile['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'properties': {'SQL_CONNECTOR_JAR': u'/usr/hdp/current/ranger-admin/ews/lib/postgresql-jdbc3.jar'}}
2018-08-26 08:47:49,830 - Modifying existing properties file: /usr/hdp/current/ranger-admin/install.properties
2018-08-26 08:47:49,831 - File['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'content': ..., 'group': None, 'mode': None, 'encoding': 'utf-8'}
2018-08-26 08:47:49,832 - File['/usr/lib/ambari-agent/DBConnectionVerification.jar'] {'content': DownloadSource('http://az3pb-p02-m1.data.some.gov.uk:8080/resources/DBConnectionVerification.jar'), 'mode': 0644}
2018-08-26 08:47:49,832 - Not downloading the file from http://az3pb-p02-m1.data.some.gov.uk:8080/resources/DBConnectionVerification.jar, because /var/lib/ambari-agent/tmp/DBConnectionVerification.jar already exists
2018-08-26 08:47:49,832 - Execute['/usr/lib/jvm/java/bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/hdp/current/ranger-admin/ews/lib/postgresql-jdbc3.jar:/usr/hdp/current/ranger-admin/ews/lib/* org.apache.ambari.server.DBConnectionVerification 'jdbc:postgresql://some-db.postgres.database.azure.com:5432/ranger' someuser@some-db [PROTECTED] org.postgresql.Driver'] {'environment': {}, 'path': ['/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin'], 'tries': 5, 'try_sleep': 10}
2018-08-26 08:47:50,563 - Execute[('ln', '-sf', u'/usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/classes/conf', u'/usr/hdp/current/ranger-admin/conf')] {'not_if': 'ls /usr/hdp/current/ranger-admin/conf', 'sudo': True, 'only_if': 'ls /usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/classes/conf'}
2018-08-26 08:47:50,569 - Skipping Execute[('ln', '-sf', u'/usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/classes/conf', u'/usr/hdp/current/ranger-admin/conf')] due to not_if
2018-08-26 08:47:50,569 - Directory['/usr/hdp/current/ranger-admin/'] {'owner': 'ranger', 'group': 'ranger', 'recursive_ownership': True}
2018-08-26 08:47:50,616 - Directory['/var/run/ranger'] {'owner': 'ranger', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2018-08-26 08:47:50,618 - File['/usr/hdp/current/ranger-admin/conf/ranger-admin-env.sh'] {'owner': 'ranger', 'content': 'export JAVA_HOME=/usr/lib/jvm/java', 'group': 'ranger', 'mode': 0755}
2018-08-26 08:47:50,618 - File['/usr/hdp/current/ranger-admin/conf/ranger-admin-env-piddir.sh'] {'owner': 'ranger', 'content': 'export RANGER_PID_DIR_PATH=/var/run/ranger\nexport RANGER_USER=ranger', 'group': 'ranger', 'mode': 0755}
2018-08-26 08:47:50,619 - Directory['/var/log/ranger/admin'] {'owner': 'ranger', 'group': 'ranger', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2018-08-26 08:47:50,620 - File['/usr/hdp/current/ranger-admin/conf/ranger-admin-env-logdir.sh'] {'owner': 'ranger', 'content': 'export RANGER_ADMIN_LOG_DIR=/var/log/ranger/admin', 'group': 'ranger', 'mode': 0755}
2018-08-26 08:47:50,621 - File['/usr/hdp/current/ranger-admin/conf/ranger-admin-default-site.xml'] {'owner': 'ranger', 'group': 'ranger'}
2018-08-26 08:47:50,621 - File['/usr/hdp/current/ranger-admin/conf/security-applicationContext.xml'] {'owner': 'ranger', 'group': 'ranger'}
2018-08-26 08:47:50,622 - Execute[('ln', '-sf', u'/usr/hdp/current/ranger-admin/ews/ranger-admin-services.sh', '/usr/bin/ranger-admin')] {'not_if': 'ls /usr/bin/ranger-admin', 'sudo': True, 'only_if': 'ls /usr/hdp/current/ranger-admin/ews/ranger-admin-services.sh'}
2018-08-26 08:47:50,626 - Skipping Execute[('ln', '-sf', u'/usr/hdp/current/ranger-admin/ews/ranger-admin-services.sh', '/usr/bin/ranger-admin')] due to not_if
2018-08-26 08:47:50,626 - XmlConfig['ranger-admin-site.xml'] {'group': 'ranger', 'conf_dir': '/usr/hdp/current/ranger-admin/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'ranger', 'configurations': ...}
2018-08-26 08:47:50,636 - Generating config: /usr/hdp/current/ranger-admin/conf/ranger-admin-site.xml
2018-08-26 08:47:50,636 - File['/usr/hdp/current/ranger-admin/conf/ranger-admin-site.xml'] {'owner': 'ranger', 'content': InlineTemplate(...), 'group': 'ranger', 'mode': 0644, 'encoding': 'UTF-8'}
2018-08-26 08:47:50,698 - Directory['/usr/hdp/current/ranger-admin/conf/ranger_jaas'] {'owner': 'ranger', 'group': 'ranger', 'mode': 0700}
2018-08-26 08:47:50,701 - File['/usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'ranger', 'group': 'ranger', 'mode': 0644}
2018-08-26 08:47:50,702 - Execute[(u'/usr/lib/jvm/java/bin/java', '-cp', u'/usr/hdp/current/ranger-admin/cred/lib/*', 'org.apache.ranger.credentialapi.buildks', 'create', u'rangeradmin', '-value', [PROTECTED], '-provider', u'jceks://file/etc/ranger/admin/rangeradmin.jceks')] {'logoutput': True, 'environment': {'JAVA_HOME': u'/usr/lib/jvm/java'}, 'sudo': True}
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Aug 26, 2018 8:47:51 AM org.apache.hadoop.util.NativeCodeLoader <clinit>
WARNING: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
The alias rangeradmin already exists!! Will try to delete first.
FOUND value of [interactive] field in the Class [org.apache.hadoop.security.alias.CredentialShell] = [true]
Deleting credential: rangeradmin from CredentialProvider: jceks://file/etc/ranger/admin/rangeradmin.jceks
rangeradmin has been successfully deleted.
org.apache.hadoop.security.alias.JavaKeyStoreProvider has been updated.
rangeradmin has been successfully created.
org.apache.hadoop.security.alias.JavaKeyStoreProvider has been updated.
2018-08-26 08:47:51,828 - File['/etc/ranger/admin/rangeradmin.jceks'] {'owner': 'ranger', 'group': 'ranger', 'mode': 0640}
2018-08-26 08:47:51,829 - Execute[(u'/usr/lib/jvm/java/bin/java', '-cp', u'/usr/hdp/current/ranger-admin/cred/lib/*', 'org.apache.ranger.credentialapi.buildks', 'create', u'ranger.ldap.ad.bind.password', '-value', [PROTECTED], '-provider', u'jceks://file/etc/ranger/admin/rangeradmin.jceks')] {'logoutput': True, 'environment': {'JAVA_HOME': u'/usr/lib/jvm/java'}, 'sudo': True}
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Aug 26, 2018 8:47:52 AM org.apache.hadoop.util.NativeCodeLoader <clinit>
WARNING: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
The alias ranger.ldap.ad.bind.password already exists!! Will try to delete first.
FOUND value of [interactive] field in the Class [org.apache.hadoop.security.alias.CredentialShell] = [true]
Deleting credential: ranger.ldap.ad.bind.password from CredentialProvider: jceks://file/etc/ranger/admin/rangeradmin.jceks
ranger.ldap.ad.bind.password has been successfully deleted.
org.apache.hadoop.security.alias.JavaKeyStoreProvider has been updated.
ranger.ldap.ad.bind.password has been successfully created.
org.apache.hadoop.security.alias.JavaKeyStoreProvider has been updated.
2018-08-26 08:47:52,892 - File['/etc/ranger/admin/rangeradmin.jceks'] {'owner': 'ranger', 'group': 'ranger', 'mode': 0640}
2018-08-26 08:47:52,893 - Execute[(u'/usr/lib/jvm/java/bin/java', '-cp', u'/usr/hdp/current/ranger-admin/cred/lib/*', 'org.apache.ranger.credentialapi.buildks', 'create', u'trustStoreAlias', '-value', [PROTECTED], '-provider', u'jceks://file/etc/ranger/admin/rangeradmin.jceks')] {'logoutput': True, 'environment': {'JAVA_HOME': u'/usr/lib/jvm/java'}, 'sudo': True}
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Aug 26, 2018 8:47:53 AM org.apache.hadoop.util.NativeCodeLoader <clinit>
WARNING: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
The alias trustStoreAlias already exists!! Will try to delete first.
FOUND value of [interactive] field in the Class [org.apache.hadoop.security.alias.CredentialShell] = [true]
Deleting credential: trustStoreAlias from CredentialProvider: jceks://file/etc/ranger/admin/rangeradmin.jceks
trustStoreAlias has been successfully deleted.
org.apache.hadoop.security.alias.JavaKeyStoreProvider has been updated.
trustStoreAlias has been successfully created.
org.apache.hadoop.security.alias.JavaKeyStoreProvider has been updated.
2018-08-26 08:47:53,974 - File['/etc/ranger/admin/rangeradmin.jceks'] {'owner': 'ranger', 'group': 'ranger', 'mode': 0640}
2018-08-26 08:47:53,975 - XmlConfig['core-site.xml'] {'group': 'ranger', 'conf_dir': '/usr/hdp/current/ranger-admin/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'ranger', 'configurations': ...}
2018-08-26 08:47:53,984 - Generating config: /usr/hdp/current/ranger-admin/conf/core-site.xml
2018-08-26 08:47:53,984 - File['/usr/hdp/current/ranger-admin/conf/core-site.xml'] {'owner': 'ranger', 'content': InlineTemplate(...), 'group': 'ranger', 'mode': 0644, 'encoding': 'UTF-8'}
2018-08-26 08:47:54,034 - Execute['ambari-python-wrap /usr/hdp/current/ranger-admin/db_setup.py -javapatch'] {'logoutput': True, 'environment': {'RANGER_ADMIN_HOME': u'/usr/hdp/current/ranger-admin', 'JAVA_HOME': u'/usr/lib/jvm/java'}, 'user': 'ranger'}
2018-08-26 08:47:54,434 [I] DB FLAVOR :POSTGRES
2018-08-26 08:47:54,434 [I] --------- Verifying Ranger DB connection ---------
2018-08-26 08:47:54,434 [I] Checking connection
2018-08-26 08:47:54,435 [JISQL] /usr/lib/jvm/java/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/postgresql-jdbc3.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://some-db.postgres.database.azure.com:5432/ranger -u someuser@some-db -p '********' -noheader -trim -c \; -query "SELECT 1;"
2018-08-26 08:47:55,187 [I] connection success
2018-08-26 08:47:55,187 [JISQL] /usr/lib/jvm/java/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/postgresql-jdbc3.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://some-db.postgres.database.azure.com:5432/ranger -u someuser@some-db -p '********' -noheader -trim -c \; -query "select version from x_db_version_h where version = 'JAVA_PATCHES' and inst_by = 'Ranger 0.7.0.2.6.5.0-292' and active = 'Y';"
2018-08-26 08:47:55,960 [I] ----------------- Applying java patches ------------
2018-08-26 08:47:55,961 [JISQL] /usr/lib/jvm/java/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/postgresql-jdbc3.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://some-db.postgres.database.azure.com:5432/ranger -u someuser@some-db -p '********' -noheader -trim -c \; -query "select version from x_db_version_h where version = 'J10001' and active = 'Y';"
2018-08-26 08:47:56,742 [I] Java patch PatchPasswordEncryption_J10001 is already applied
2018-08-26 08:47:56,742 [JISQL] /usr/lib/jvm/java/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/postgresql-jdbc3.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://some-db.postgres.database.azure.com:5432/ranger -u someuser@some-db -p '********' -noheader -trim -c \; -query "select version from x_db_version_h where version = 'J10002' and active = 'Y';"
2018-08-26 08:47:57,534 [I] Java patch PatchMigration_J10002 is already applied
2018-08-26 08:47:57,534 [JISQL] /usr/lib/jvm/java/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/postgresql-jdbc3.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://some-db.postgres.database.azure.com:5432/ranger -u someuser@some-db -p '********' -noheader -trim -c \; -query "select version from x_db_version_h where version = 'J10003' and active = 'Y';"
2018-08-26 08:47:58,304 [I] Java patch PatchPersmissionModel_J10003 is already applied
2018-08-26 08:47:58,304 [JISQL] /usr/lib/jvm/java/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/postgresql-jdbc3.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://some-db.postgres.database.azure.com:5432/ranger -u someuser@some-db -p '********' -noheader -trim -c \; -query "select version from x_db_version_h where version = 'J10004' and active = 'Y';"
2018-08-26 08:47:59,030 [I] Java patch PatchForServiceVersionInfo_J10004 is already applied
2018-08-26 08:47:59,030 [JISQL] /usr/lib/jvm/java/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/postgresql-jdbc3.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://some-db.postgres.database.azure.com:5432/ranger -u someuser@some-db -p '********' -noheader -trim -c \; -query "select version from x_db_version_h where version = 'J10005' and active = 'Y';"
2018-08-26 08:47:59,807 [JISQL] /usr/lib/jvm/java/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/postgresql-jdbc3.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://some-db.postgres.database.azure.com:5432/ranger -u someuser@some-db -p '********' -noheader -trim -c \; -query "select version from x_db_version_h where version = 'J10005' and active = 'N';"
2018-08-26 08:48:00,534 [I] Java patch PatchTagModulePermission_J10005 is being applied by some other process
2018-08-26 08:50:00,624 [JISQL] /usr/lib/jvm/java/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/postgresql-jdbc3.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://some-db.postgres.database.azure.com:5432/ranger -u someuser@some-db -p '********' -noheader -trim -c \; -query "select version from x_db_version_h where version = 'J10005' and active = 'N';"
2018-08-26 08:50:01,707 [I] Java patch PatchTagModulePermission_J10005 is being applied by some other process
2018-08-26 08:52:01,807 [JISQL] /usr/lib/jvm/java/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/postgresql-jdbc3.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://some-db.postgres.database.azure.com:5432/ranger -u someuser@some-db -p '********' -noheader -trim -c \; -query "select version from x_db_version_h where version = 'J10005' and active = 'N';"
2018-08-26 08:52:03,038 [I] Java patch PatchTagModulePermission_J10005 is being applied by some other process
2018-08-26 08:54:03,134 [JISQL] /usr/lib/jvm/java/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/postgresql-jdbc3.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://some-db.postgres.database.azure.com:5432/ranger -u someuser@some-db -p '********' -noheader -trim -c \; -query "select version from x_db_version_h where version = 'J10005' and active = 'N';"
2018-08-26 08:54:04,197 [I] Java patch PatchTagModulePermission_J10005 is being applied by some other process
2018-08-26 08:56:04,280 [JISQL] /usr/lib/jvm/java/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/postgresql-jdbc3.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://some-db.postgres.database.azure.com:5432/ranger -u someuser@some-db -p '********' -noheader -trim -c \; -query "select version from x_db_version_h where version = 'J10005' and active = 'N';"
2018-08-26 08:56:05,161 [I] Java patch PatchTagModulePermission_J10005 is being applied by some other process
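For anyone hitting the same wall: judging by the queries in the log, the patch runner treats an inactive (active = 'N') J10005 row in x_db_version_h as a lock held by another process. A sketch of how one might inspect and clear a stale lock row, using plain psql rather than the jisql wrapper from the logs, and assuming no other db_setup.py run is actually in progress:
# Inspect the J10005 bookkeeping rows first
psql "host=some-db.postgres.database.azure.com dbname=ranger user=someuser@some-db" \
  -c "select version, active, inst_by from x_db_version_h where version = 'J10005';"
# If the only J10005 row is a stale active='N' entry left by a crashed run,
# removing it lets db_setup.py re-apply the patch on the next start.
psql "host=some-db.postgres.database.azure.com dbname=ranger user=someuser@some-db" \
  -c "delete from x_db_version_h where version = 'J10005' and active = 'N';"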
... View more
Labels:
- Apache Ranger
08-22-2018
02:38 PM
Hi, I have deployed an HDP cluster using CloudBreak 2.7.1 with SSO enabled. I have nginx on top with SSL enabled, which then connects to Knox via 8443. This works fine in a browser apart from the following issues:
1. When hitting Ambari ([cluster-name]/[gateway-name]/ambari) it redirects to Knox SSO using the private IP, which obviously fails (as it's not accessible). The redirection part works fine with Ranger, though.
2. There's no way to log in to Ranger with the admin account, because Knox is using LDAP, so local users are disabled. How then can I access the admin area, for example to delegate permissions?
3. WebHDFS: when curling WebHDFS (via nginx to Knox), I get this:
curl -ik --negotiate -u : https://example.com/cluster/gateway/webhdfs/v1//?op=LISTSTATUS
HTTP/1.1 302 Found
Server: nginx
Date: Wed, 22 Aug 2018 14:14:54 GMT
Content-Length: 0
Connection: keep-alive
Location: https://example.com/cluster/sso/api/v1/websso?originalUrl=https://aexample.com/cluster/gateway/webhdfs/v1/?op=LISTSTATUS
Why does it redirect me? I want to provide my credentials either via Kerberos or just an LDAP username and password. Or maybe my nginx config is missing something (see the note after the config)?
server {
listen 443 ssl;
server_name example.com;
ssl_certificate /etc/ssl/example.com.pem;
ssl_certificate_key /etc/ssl/example.com.key;
location / {
proxy_pass https://<ip>:8443/;
# this magic is needed for WebSocket
proxy_http_version 1.1;
proxy_redirect off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $http_host;
}
}
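One thing worth trying, as an assumption rather than a confirmed fix: recent Knox releases honor X-Forwarded-* headers when building redirect URLs (the gateway.xforwarded.enabled property, on by default), so forwarding the original scheme and host from nginx may stop the SSO redirects pointing at the private IP:
location / {
    proxy_pass https://<ip>:8443/;
    # Tell Knox which public scheme/host the client actually used,
    # so redirects are generated against the proxy, not the private IP.
    proxy_set_header X-Forwarded-Host $http_host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}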
... View more
Labels:
08-22-2018
02:26 PM
Wow, this is exactly what I was trying to achieve. When I have time I'll test this and let you know; for now I went a different route. Thanks!
... View more
08-18-2018
08:33 AM
I'm using Cloudbreak to deploy HDP clusters. However, the requirement is to base them on a hardened image that follows CIS guidelines, so I wanted to use their CentOS 7 image from the Azure Marketplace. I did all the prerequisites (Packer built me an image, I added a new image catalog, etc.). However, as you can already suspect, the deployment failed with this:
{
"status": "Failed",
"error": {
"code": "ResourceDeploymentFailure",
"message": "The resource operation completed with terminal provisioning state 'Failed'.",
"details": [
{
"code": "VMMarketplaceInvalidInput",
"message": "Creating a virtual machine from Marketplace image requires Plan information in the request. VM: '/subscriptions/xxxx/resourceGroups/xxxx/providers/Microsoft.Compute/virtualMachines/xxxxxxxm1'."
}
]
} This is because the ARM template does not include the plan information that is required when deploying VMs from the Marketplace. Unfortunately, I can't see a workaround unless I modify Cloudbreak itself...
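For context, this is the shape of the plan block a Marketplace-based VM resource needs in its ARM template (the values are placeholders, not the actual CIS image identifiers; the image terms also need to have been accepted on the subscription beforehand):
"plan": {
    "name": "<plan/sku name>",
    "publisher": "<publisher>",
    "product": "<offer>"
}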
... View more
Labels:
- Hortonworks Cloudbreak
08-14-2018
12:22 PM
Hi @Geoffrey Shelton Okot, today I got access to the corp DC, so I will be testing this. I will update you as soon as I learn more. Many thanks!
... View more
08-14-2018
05:57 AM
Hi, by open ports I mean communication between nodes for Hadoop operations.
... View more
08-13-2018
03:44 PM
Hi, in the LDAP configuration in CB 2.7.1 you can specify an admin group. What does it do, and what format should it be: just a group name, or a full distinguished name? My problem is that if I enable SSO in Cloudbreak, I'm not able to log in to Ranger or Ambari as an administrator. How can I make sure that some of my AD users can access the admin area?
... View more
Labels:
08-13-2018
03:41 PM
Thanks @pdarvasi. I will let you know how it goes. Also, do you mean that the cb scripts will open the required ports etc.? Cheers!
... View more
08-13-2018
10:22 AM
Hi, I want to deploy HDP 2.6 (using Cloudbreak 2.7.1) using pre-hardened CIS images in Azure. What should I take into consideration? By default only port 22 is opened by iptables. (I also struggle to deploy cbd 2.7 on that image, as the Docker containers don't seem to talk to each other, but I will raise that separately if I don't have further luck with it.)
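As a starting point on a CIS-hardened host, the usual first step is to allow intra-cluster traffic before layering finer-grained rules; Hadoop services negotiate many ephemeral ports, so per-port rules get unwieldy. A rough sketch, assuming the cluster sits alone in a trusted subnet (adjust the CIDR to your VNET):
# Allow all traffic between cluster nodes within the trusted subnet
iptables -I INPUT -s 10.251.0.0/16 -j ACCEPT
# Persist across reboots (CentOS 7 with the iptables-services package)
service iptables save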
... View more
Labels:
08-13-2018
09:57 AM
Hi, is it possible to achieve this? I want to be able to authenticate against test1.example.com (where Ambari created the service principals) and against test2.example.com, where I also have some other users. I use Active Directory.
... View more
Labels:
- Hortonworks Data Platform (HDP)
07-27-2018
11:05 AM
I have configured HDP with Kerberos for the realm DATA.EXAMPLE.COM. This generated the following config in /etc/krb5.conf:
[domain_realm]
.data.example.com = DATA.EXAMPLE.COM
[realms]
DATA.EXAMPLE.COM = {
admin_server = data.example.com
kdc = data.example.com
}
However, I need to allow corp users to use kinit as well, so I've modified it to:
[domain_realm]
.data.example.com = DATA.EXAMPLE.COM
.corp.example.com = CORP.EXAMPLE.COM
[realms]
DATA.EXAMPLE.COM = {
admin_server = data.example.com
kdc = data.example.com
}
CORP.EXAMPLE.COM = {
admin_server = corp.example.com
kdc = corp.example.com
}
So this works for authenticating with kinit. But it does not when I try to interact with the cluster. Whenever I type hdfs dfs -ls / I get this message:
18/07/27 11:01:24 INFO util.KerberosName: No auth_to_local rules applied to user@CORP.EXAMPLE.COM
18/07/27 11:01:29 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 60 seconds before. Last Login=1532689285894
18/07/27 11:01:33 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 60 seconds before. Last Login=1532689285894
18/07/27 11:01:34 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 60 seconds before. Last Login=1532689285894
18/07/27 11:01:39 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 60 seconds before. Last Login=1532689285894
18/07/27 11:01:43 WARN ipc.Client: Couldn't setup connection for user@CORP.EXAMPLE.COM to ds-beta-prod-02-m3.data.example.com/10.251.2.76:8020
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Fail to create credential. (63) - No service creds)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:414)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:595)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:397)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:762)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:758)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:758)
at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1620)
at org.apache.hadoop.ipc.Client.call(Client.java:1451)
at org.apache.hadoop.ipc.Client.call(Client.java:1398)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:823)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:290)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:202)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:184)
at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2177)
at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1442)
at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1438)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1454)
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
at org.apache.hadoop.fs.Globber.glob(Globber.java:265)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1697)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:297)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:356)
Caused by: GSSException: No valid credentials provided (Mechanism level: Fail to create credential. (63) - No service creds)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:770)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
... 41 more
Caused by: KrbException: Fail to create credential. (63) - No service creds
at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:162)
at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:458)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:693)
... 44 more
ls: Failed on local exception: java.io.IOException: Couldn't setup connection for user@CORP.EXAMPLE.COM to ds-beta-prod-02-m3.data.example.com/10.251.2.76:8020; Host Details : local host is: "ds-beta-prod-02-m2.data.exmple.com/10.251.2.74"; destination host is: "ds-beta-prod-02-m3.data.example.com":8020;
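Two things stand out in that trace: "No auth_to_local rules applied" (the CORP principal is never mapped to a local user) and "No service creds" (no cross-realm path from CORP to DATA). A sketch of what addressing both might look like, assuming a one-way trust (a krbtgt/DATA.EXAMPLE.COM@CORP.EXAMPLE.COM principal) already exists between the realms; the rule below is illustrative, not taken from a working cluster:
# /etc/krb5.conf -- tell clients how to walk from CORP to DATA directly
[capaths]
CORP.EXAMPLE.COM = {
    DATA.EXAMPLE.COM = .
}
# hadoop.security.auth_to_local (core-site.xml) -- map CORP principals to local users
RULE:[1:$1@$0](.*@CORP\.EXAMPLE\.COM)s/@.*//
DEFAULT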
... View more
Labels:
- Hortonworks Data Platform (HDP)
06-28-2018
09:06 AM
Hi @Jay Kumar SenSharma, yes, I'm aware of those settings, and as I said (the link above also mentions this) I would like to avoid changing the default value. My question is more about why Cloudbreak 2.7 creates service users with low UIDs when I enable Kerberos.
... View more
06-28-2018
08:29 AM
Hi, I'm using CloudBreak 2.7 for deploying my clusters. For the record: I didn't have this issue with CBD 2.6. Issue: in my kerberized cluster, when trying to start Hive Interactive I get this error: Requested user hive is not whitelisted and has id 982, which is below the minimum allowed 1000. When I check /etc/passwd I can see that half of the HDP service users are below 1000 and some are above, so the error message is valid. For security reasons I don't want to decrease the minimum value. Is there a fix for that? Many thanks in advance.
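One direction that avoids lowering the floor: the "whitelisted" part of the error refers to the allowed.system.users list in YARN's container-executor.cfg, so explicitly whitelisting the hive user may be safer than decreasing min.user.id. A sketch of the relevant lines (in practice managed via the container-executor template in Ambari rather than edited by hand):
# /etc/hadoop/conf/container-executor.cfg
yarn.nodemanager.linux-container-executor.group=hadoop
min.user.id=1000
# System users allowed to run containers despite a UID below min.user.id
allowed.system.users=hive
banned.users=hdfs,yarn,mapred,bin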
... View more
Labels:
- Apache Hive
- Hortonworks Cloudbreak
06-28-2018
07:55 AM
Hi @mmolnar Yea, I won't forget. I've automated this process, so in my case it's just changing the value of an Ansible variable. Unfortunately, I've observed another issue: in my kerberized cluster, when trying to start Hive Interactive I get this error: Requested user hive is not whitelisted and has id 982, which is below the minimum allowed 1000. When I checked /etc/passwd I saw that half of the HDP service users are below 1000 and some are above. This is again after an upgrade from 2.6. Should I create a new thread for that issue? Thanks
... View more
06-27-2018
10:27 PM
Hi @mmolnar I can confirm that the fix in that release candidate works for me. Thanks a lot.
... View more
06-27-2018
10:05 PM
Hi @mmolnar I can confirm that after adding this env variable I don't have this issue anymore and I'm on cbd 2.7.1-rc.13 Thank you!
... View more
06-27-2018
03:26 PM
Hi @mmolnar, thanks for getting back to me, as I'm under huge pressure due to deadlines. This time I've named my test cluster "asdtesttest" (I know, not a descriptive name).
hostname -d returns nothing.
hostname -f returns: asdtesttest-m0
The files you asked for are attached. logs.zip
... View more
06-27-2018
03:05 PM
I confirm that this is only an issue when my VNET is using custom DNS (like the DNS provided by AADDS). CloudBreak 2.6 used the unbound service, and hosts could communicate with each other using "example.com". It seems that's no longer the case, or there's a missing configuration. This is a massive blocker for us.
... View more
06-27-2018
02:35 PM
Hi, yesterday I upgraded CBD from 2.6 to 2.7 and I'm getting a lot of issues. Weird, given that 2.7 is GA and 2.6 is not. I'm trying to deploy my HDP cluster to a subnet that uses Azure AD Domain Services DNS. On deployment I disabled public IPs as well. This is what I'm getting in the logs when deploying the default blueprint with default options onto my private subnet:
cloudbreak_1 | 2018-06-27 14:34:12,803 [containerBootstrapBuilderExecutor-18] doCall:85 INFO c.s.c.o.OrchestratorBootstrapRunner - [owner:15adc8a5-f35f-4e42-b1da-2567ccad0c59] [type:STACK] [id:13] [name:dsbeta01] [flow:9c9705c6-d536-4fb7-8bf1-f1075b5d8ff0] [tracking:] Calling orchestrator bootstrap: Salt, additional info: SaltBootstrap{sc=com.sequenceiq.cloudbreak.orchestrator.salt.client.SaltConnector@3172c1ff, allGatewayConfigs=[GatewayConfig{connectionAddress='10.251.3.69', publicAddress='10.251.3.69', privateAddress='10.251.3.69', hostname='null', gatewayPort=9443, knoxGatewayEnabled=true, primary=true}], originalTargets=[Node{privateIp='10.251.3.69', publicIp='10.251.3.69', hostname='', domain='null', hostGroup='master', dataVolumes=null}, Node{privateIp='10.251.3.68', publicIp='10.251.3.68', hostname='', domain='null', hostGroup='worker', dataVolumes=null}], targets=[Node{privateIp='10.251.3.69', publicIp='10.251.3.69', hostname='', domain='null', hostGroup='master', dataVolumes=null}, Node{privateIp='10.251.3.68', publicIp='10.251.3.68', hostname='', domain='null', hostGroup='worker', dataVolumes=null}]}
cloudbreak_1 | 2018-06-27 14:34:12,805 [containerBootstrapBuilderExecutor-18] call:55 INFO c.s.c.o.s.p.SaltBootstrap - [owner:15adc8a5-f35f-4e42-b1da-2567ccad0c59] [type:STACK] [id:13] [name:dsbeta01] [flow:9c9705c6-d536-4fb7-8bf1-f1075b5d8ff0] [tracking:] Bootstrapping of nodes [0/2]
cloudbreak_1 | 2018-06-27 14:34:12,806 [containerBootstrapBuilderExecutor-18] call:57 INFO c.s.c.o.s.p.SaltBootstrap - [owner:15adc8a5-f35f-4e42-b1da-2567ccad0c59] [type:STACK] [id:13] [name:dsbeta01] [flow:9c9705c6-d536-4fb7-8bf1-f1075b5d8ff0] [tracking:] Missing targets for SaltBootstrap: [Node{privateIp='10.251.3.69', publicIp='10.251.3.69', hostname='', domain='null', hostGroup='master', dataVolumes=null}, Node{privateIp='10.251.3.68', publicIp='10.251.3.68', hostname='', domain='null', hostGroup='worker', dataVolumes=null}]
cloudbreak_1 | 2018-06-27 14:34:12,827 [containerBootstrapBuilderExecutor-18] lambda$hostnameVerifier$0:28 INFO c.s.c.c.CertificateTrustManager - [owner:15adc8a5-f35f-4e42-b1da-2567ccad0c59] [type:STACK] [id:13] [name:dsbeta01] [flow:9c9705c6-d536-4fb7-8bf1-f1075b5d8ff0] [tracking:] verify hostname: 10.251.3.69
cloudbreak_1 | 2018-06-27 14:34:12,849 [containerBootstrapBuilderExecutor-18] action:119 INFO c.s.c.o.s.c.SaltConnector - [owner:15adc8a5-f35f-4e42-b1da-2567ccad0c59] [type:STACK] [id:13] [name:dsbeta01] [flow:9c9705c6-d536-4fb7-8bf1-f1075b5d8ff0] [tracking:] SaltBoot. SaltAction response: SaltBootResponses{responses=[SaltBootResponse{status='', address='10.251.3.69', statusCode=500, version='null', errorText='it is expected to have a default domain, but it is empty'}, SaltBootResponse{status='', address='10.251.3.68', statusCode=500, version='null', errorText='it is expected to have a default domain, but it is empty'}, SaltBootResponse{status='', address='10.251.3.69', statusCode=500, version='null', errorText='it is expected to have a default domain, but it is empty'}]}
cloudbreak_1 | 2018-06-27 14:34:12,851 [containerBootstrapBuilderExecutor-18] call:64 INFO c.s.c.o.s.p.SaltBootstrap - [owner:15adc8a5-f35f-4e42-b1da-2567ccad0c59] [type:STACK] [id:13] [name:dsbeta01] [flow:9c9705c6-d536-4fb7-8bf1-f1075b5d8ff0] [tracking:] SaltBootstrap responses: SaltBootResponses{responses=[SaltBootResponse{status='', address='10.251.3.69', statusCode=500, version='null', errorText='it is expected to have a default domain, but it is empty'}, SaltBootResponse{status='', address='10.251.3.68', statusCode=500, version='null', errorText='it is expected to have a default domain, but it is empty'}, SaltBootResponse{status='', address='10.251.3.69', statusCode=500, version='null', errorText='it is expected to have a default domain, but it is empty'}]}
cloudbreak_1 | 2018-06-27 14:34:12,852 [containerBootstrapBuilderExecutor-18] call:67 INFO c.s.c.o.s.p.SaltBootstrap - [owner:15adc8a5-f35f-4e42-b1da-2567ccad0c59] [type:STACK] [id:13] [name:dsbeta01] [flow:9c9705c6-d536-4fb7-8bf1-f1075b5d8ff0] [tracking:] Failed to distributed salt run to: 10.251.3.69
cloudbreak_1 | 2018-06-27 14:34:12,853 [containerBootstrapBuilderExecutor-18] call:67 INFO c.s.c.o.s.p.SaltBootstrap - [owner:15adc8a5-f35f-4e42-b1da-2567ccad0c59] [type:STACK] [id:13] [name:dsbeta01] [flow:9c9705c6-d536-4fb7-8bf1-f1075b5d8ff0] [tracking:] Failed to distributed salt run to: 10.251.3.68
cloudbreak_1 | 2018-06-27 14:34:12,853 [containerBootstrapBuilderExecutor-18] call:67 INFO c.s.c.o.s.p.SaltBootstrap - [owner:15adc8a5-f35f-4e42-b1da-2567ccad0c59] [type:STACK] [id:13] [name:dsbeta01] [flow:9c9705c6-d536-4fb7-8bf1-f1075b5d8ff0] [tracking:] Failed to distributed salt run to: 10.251.3.69
cloudbreak_1 | 2018-06-27 14:34:12,854 [containerBootstrapBuilderExecutor-18] call:75 INFO c.s.c.o.s.p.SaltBootstrap - [owner:15adc8a5-f35f-4e42-b1da-2567ccad0c59] [type:STACK] [id:13] [name:dsbeta01] [flow:9c9705c6-d536-4fb7-8bf1-f1075b5d8ff0] [tracking:] Missing nodes to run saltbootstrap: [Node{privateIp='10.251.3.69', publicIp='10.251.3.69', hostname='', domain='null', hostGroup='master', dataVolumes=null}, Node{privateIp='10.251.3.68', publicIp='10.251.3.68', hostname='', domain='null', hostGroup='worker', dataVolumes=null}]
cloudbreak_1 | 2018-06-27 14:34:12,855 [containerBootstrapBuilderExecutor-18] doCall:111 WARN c.s.c.o.OrchestratorBootstrapRunner - [owner:15adc8a5-f35f-4e42-b1da-2567ccad0c59] [type:STACK] [id:13] [name:dsbeta01] [flow:9c9705c6-d536-4fb7-8bf1-f1075b5d8ff0] [tracking:] Orchestrator component Salt failed to start, retrying [60/90], error count [60/90]. Elapsed time: 52 ms, Total elapsed time: 593529 ms, Reason: com.sequenceiq.cloudbreak.orchestrator.exception.CloudbreakOrchestratorFailedException: There are missing nodes from saltbootstrap: [Node{privateIp='10.251.3.69', publicIp='10.251.3.69', hostname='', domain='null', hostGroup='master', dataVolumes=null}, Node{privateIp='10.251.3.68', publicIp='10.251.3.68', hostname='', domain='null', hostGroup='worker', dataVolumes=null}], additional info: SaltBootstrap{sc=com.sequenceiq.cloudbreak.orchestrator.salt.client.SaltConnector@3172c1ff, allGatewayConfigs=[GatewayConfig{connectionAddress='10.251.3.69', publicAddress='10.251.3.69', privateAddress='10.251.3.69', hostname='null', gatewayPort=9443, knoxGatewayEnabled=true, primary=true}], originalTargets=[Node{privateIp='10.251.3.69', publicIp='10.251.3.69', hostname='', domain='null', hostGroup='master', dataVolumes=null}, Node{privateIp='10.251.3.68', publicIp='10.251.3.68', hostname='', domain='null', hostGroup='worker', dataVolumes=null}], targets=[Node{privateIp='10.251.3.69', publicIp='10.251.3.69', hostname='', domain='null', hostGroup='master', dataVolumes=null}, Node{privateIp='10.251.3.68', publicIp='10.251.3.68', hostname='', domain='null', hostGroup='worker', dataVolumes=null}]}
cloudbreak_1 | 2018-06-27 14:34:12,867 [http-nio-8080-exec-2] getAllForAutoscale:170 INFO c.s.c.s.StackCommonService - [owner:undefined] [type:StackV1] [id:] [name:] [flow:] [tracking:] Get all stack, autoscale authorized only.
... View more
Labels:
- Hortonworks Cloudbreak
06-27-2018
09:23 AM
Hi,
I'm using Cloudbreak Deployer: 2.7.0
Whenever I include ATLAS_SERVER in my blueprint, I get the following error on deployment, or when I click "show generated blueprint": Caused by: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('}' (code 125)): was expecting double-quote to start field name
cloudbreak_1 | at [Source: {
cloudbreak_1 | "application-properties": {
cloudbreak_1 | "properties": {
cloudbreak_1 |
cloudbreak_1 | "atlas.authorizer.impl": "ranger",
cloudbreak_1 |
cloudbreak_1 |
cloudbreak_1 | }
cloudbreak_1 | }
cloudbreak_1 | ,
cloudbreak_1 | "ranger-atlas-plugin-properties": {
cloudbreak_1 | "properties": {
cloudbreak_1 | "ranger-atlas-plugin-enabled": "Yes"
cloudbreak_1 | }
cloudbreak_1 | }
cloudbreak_1 |
cloudbreak_1 | }; line: 8, column: 6] Obviously, I tried explicitly disabling the Ranger plugin for Atlas and adding atlas.authorizer.impl to my blueprint, but nothing helps. I think it's a general bug in how some properties are appended, leaving a trailing comma.
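To illustrate the parse failure: the dangling comma after "atlas.authorizer.impl": "ranger" is what trips Jackson at line 8, column 6. The same payload without it is valid JSON:
{
  "application-properties": {
    "properties": {
      "atlas.authorizer.impl": "ranger"
    }
  },
  "ranger-atlas-plugin-properties": {
    "properties": {
      "ranger-atlas-plugin-enabled": "Yes"
    }
  }
}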
... View more
Labels: