Member since: 10-23-2017
Posts: 30
Kudos Received: 5
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1822 | 05-31-2019 03:57 PM
 | 1388 | 01-31-2018 04:13 PM
07-21-2019
11:35 PM
Hi, I shared Toad for Hadoop at http://acnalert.eastus.cloudapp.azure.com/world/. I recommend using version 1.3, since 1.5 has a license limit.
07-21-2019
11:33 PM
Hi, I shared it at http://acnalert.eastus.cloudapp.azure.com/world/. I recommend using version 1.3, since 1.5 has a license limit.
05-31-2019
03:57 PM
The fix is to set this: `set hive.support.sql11.reserved.keywords=true;`
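For reference, a minimal sketch of applying the setting for a single beeline session (the JDBC URL is a placeholder, not from this thread):

```bash
# Minimal sketch: apply the setting for this session only, then re-run
# the statement that previously failed. The connection URL is a placeholder.
beeline -u "jdbc:hive2://hiveserver:10000/default" -e "
set hive.support.sql11.reserved.keywords=true;
SELECT NULL AS account;"
```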
05-31-2019
02:41 PM
Hi. When I do a `SELECT null;` in the Hive CLI, I get a good answer -> NULL. When I do this in beeline, I get:

Error: Error while compiling statement: FAILED: SemanticException [Error 10004]: Line 1:7 Invalid table alias or column reference 'null': (possible column names are: ) (state=42000,code=10004)

This works on one of my clusters but not on another (HDP 2.6.1; Connected to: Apache Hive (version 2.1.0.2.6.1.0-129), Driver: Hive JDBC (version 1.2.1000.2.6.1.0-129), Transaction isolation: TRANSACTION_REPEATABLE_READ, Beeline version 1.2.1000.2.6.1.0-129 by Apache Hive). I need this to work, since I run:

INSERT OVERWRITE TABLE schema.customer SELECT CUST_ACCOUNTS otherschema.CUST_ACCOUNT_ID, NULL AS ACCOUNT FROM xxxx;

I really appreciate any help.
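A commonly suggested workaround for untyped NULLs in Hive (my assumption, not the accepted fix in this thread) is to cast the NULL to an explicit type:

```bash
# Hedged workaround sketch: give the NULL an explicit type so the planner
# can resolve the column. The JDBC URL is a placeholder.
beeline -u "jdbc:hive2://hiveserver:10000/default" \
  -e "SELECT CAST(NULL AS STRING) AS account;"
```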
Labels:
- Apache Hive
02-25-2019
07:28 PM
3 Kudos
To download a CSV of Ranger policies: `curl -H "Content-Type: text/x-csv" -o Ranger_Policies.csv -u user:pass "http://rangerAdminServer:6080/service/plugins/policies/csv?serviceType="`
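If JSON suits you better, a hedged companion sketch (the exportJson endpoint, host, and credentials are my assumptions patterned on Ranger's REST API, not taken from this post):

```bash
# Hedged sketch: export Ranger policies as JSON instead of CSV.
# Host, credentials, and the exportJson endpoint are assumptions.
curl -u user:pass -o Ranger_Policies.json \
  "http://rangerAdminServer:6080/service/plugins/policies/exportJson?serviceType="
```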
07-17-2018
07:35 PM
hbase-protocol-1.1.2.2.3.2.0-2950.jar
06-22-2018
02:07 AM
I have followed what @Michael Bronson indicated; the folder /var/lib/ambari-agent/data/datanode did not exist, so I created the dfs_data_dir_mount.hist file and copied the content from another server in the cluster. The issue is still not solved.
05-23-2018
06:19 PM
You need to ask your help desk to assist you with the disk space, or you can Google it. If you don't have Linux installed, then you need to install it.
05-23-2018
06:13 PM
HDD = hard disk. HDP = Hortonworks Data Platform (a Hadoop ecosystem distro). Do you have Ubuntu or CentOS installed? That is the first step.
05-16-2018
05:14 PM
In my case I get many package errors: "Cannot match package for regexp name datafu_${stack_version}." Then I ran a yum reinstall of the package.
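A hedged sketch of that reinstall (the package glob is an assumption; substitute the name from your own error message):

```bash
# Hedged sketch: reinstall the HDP package that failed the regexp match.
# The glob below is an assumption; take the real name from your error.
yum reinstall -y "datafu_*"
```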
05-16-2018
02:26 PM
I don't understand why this is needed, but it did it. Thanks.
05-16-2018
01:57 PM
I have downloaded the dependency from http://rpmfind.net/linux/rpm2html/search.php?query=libtirpc+%3D+0.2.4-0.10.el7&submit=Search+...&system=&arch
05-16-2018
01:52 PM
Enabling subscription-manager repos and downloading the RPM; I still have dependency issues:

[root@phcv-dlhadoop01 ~]# yum install libtirpc-devel-0.2.4-0.10.el7.x86_64.rpm -y --skip-broken
Loaded plugins: product-id, rhnplugin, search-disabled-repos, subscription-manager
This system is receiving updates from RHN Classic or Red Hat Satellite.
Examining libtirpc-devel-0.2.4-0.10.el7.x86_64.rpm: libtirpc-devel-0.2.4-0.10.el7.x86_64
Marking libtirpc-devel-0.2.4-0.10.el7.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package libtirpc-devel.x86_64 0:0.2.4-0.10.el7 will be installed
--> Processing Dependency: libtirpc = 0.2.4-0.10.el7 for package: libtirpc-devel-0.2.4-0.10.el7.x86_64
Packages skipped because of dependency problems:
    libtirpc-devel-0.2.4-0.10.el7.x86_64 from /libtirpc-devel-0.2.4-0.10.el7.x86_64
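A hedged resolution sketch: the devel package requires libtirpc = 0.2.4-0.10.el7, so install the matching base package alongside it (the base-package URL is my assumption, patterned on the devel package's location):

```bash
# Hedged sketch: fetch the base libtirpc that satisfies the versioned
# dependency, then install both RPMs in one transaction.
wget -nv https://rpmfind.net/linux/centos/7.4.1708/os/x86_64/Packages/libtirpc-0.2.4-0.10.el7.x86_64.rpm
wget -nv https://rpmfind.net/linux/centos/7.4.1708/os/x86_64/Packages/libtirpc-devel-0.2.4-0.10.el7.x86_64.rpm
yum localinstall -y libtirpc-0.2.4-0.10.el7.x86_64.rpm libtirpc-devel-0.2.4-0.10.el7.x86_64.rpm
```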
05-15-2018
09:07 PM
Thanks!!!!!!!
wget -nv https://rpmfind.net/linux/centos/7.4.1708/os/x86_64/Packages/libtirpc-devel-0.2.4-0.10.el7.x86_64.rpm
yum install libtirpc-devel-0.2.4-0.10.el7.x86_64.rpm -y --skip-broken
05-14-2018
07:28 PM
Hi. I'm deploying HDP 2.6.1 on RHEL 7. I have installed Ambari, and when deploying the cluster I get the following error at the first step of the deploy:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py", line 35, in <module>
    BeforeAnyHook().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 375, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py", line 29, in hook
    setup_users()
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py", line 51, in setup_users
    fetch_nonlocal_groups = params.fetch_nonlocal_groups,
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/accounts.py", line 84, in action_create
    shell.checked_call(command, sudo=True)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'usermod -G hdfs -g hadoop hdfs' returned 6. usermod: user 'hdfs' does not exist in /etc/passwd

Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-3626.json', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-3626.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1', '']

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/hook.py", line 37, in <module>
    BeforeInstallHook().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 382, in execute
    self.save_component_version_to_structured_out(self.command_name)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 244, in save_component_version_to_structured_out
    stack_select_package_name = stack_select.get_package_name()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/stack_select.py", line 110, in get_package_name
    package = get_packages(PACKAGE_SCOPE_STACK_SELECT, service_name, component_name)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/stack_select.py", line 224, in get_packages
    supported_packages = get_supported_packages()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/stack_select.py", line 148, in get_supported_packages
    raise Fail("Unable to query for supported packages using {0}".format(stack_selector_path))
resource_management.core.exceptions.Fail: Unable to query for supported packages using /usr/bin/hdp-select

stdout: /var/lib/ambari-agent/data/output-3626.txt
2018-05-14 14:54:16,835 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2018-05-14 14:54:16,841 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2018-05-14 14:54:16,843 - Group['livy'] {}
2018-05-14 14:54:16,844 - Group['spark'] {}
2018-05-14 14:54:16,844 - Group['hdfs'] {}
2018-05-14 14:54:16,844 - Group['hadoop'] {}
2018-05-14 14:54:16,844 - Group['users'] {}
2018-05-14 14:54:16,845 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-05-14 14:54:16,853 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-05-14 14:54:16,860 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-05-14 14:54:16,866 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-05-14 14:54:16,872 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-05-14 14:54:16,879 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-05-14 14:54:16,885 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-05-14 14:54:16,891 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-05-14 14:54:16,898 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2018-05-14 14:54:16,904 - Modifying user hdfs

Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-3626.json', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-3626.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1', '']
Command failed after 1 tries

All the users exist except the hdfs user. Any ideas? Thanks.
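A hedged workaround sketch (my assumption, not a confirmed fix from this thread): pre-create the missing local account so the hook's usermod call can succeed, then retry the deploy.

```bash
# Hedged sketch: create the group/user the Ambari hook expects.
# The hook runs 'usermod -G hdfs -g hadoop hdfs', which needs the user to exist.
groupadd -f hadoop
groupadd -f hdfs
useradd -g hadoop -G hdfs hdfs
```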
Labels:
- Apache Ambari
02-06-2018
03:28 PM
Is there a way to do the upgrade while keeping the kafka-logs data? @hgalante
02-05-2018
04:52 PM
1 Kudo
First, back up some metadata related to your topics:

su - kafka

Script:

for i in `/usr/hdp/current/kafka-broker/bin/kafka-consumer-groups.sh --zookeeper server-02:2181 --list --security-protocol PLAINTEXTSASL 2>&1`
do
  echo $i
  /usr/hdp/current/kafka-broker/bin/kafka-consumer-groups.sh --zookeeper server-01:2181 --group $i --describe
done

Describe topics:

/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --describe --zookeeper sr-hadctl-xt01:2181 --topic bptopic

Now remove the brokers' metadata from ZooKeeper (this deletes the topic from ZooKeeper):

/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server `hostname -f`:2181
rmr /brokers

Create the Kafka topic (on the broker server):

su - kafka
cd /usr/hdp/current/kafka-broker/bin
./kafka-topics.sh --create --zookeeper server-2:2181 --partitions 1 --replication-factor 1 --topic bptopic

Altering retention time (86400000 ms = 24 hours), OPTION 1:

[root@server-02 ~]# /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper server-01:2181 --alter --topic bptopic --config retention.ms=86400000
WARNING: Altering topic configuration from this script has been deprecated and may be removed in future releases.
Going forward, please use kafka-configs.sh for this functionality.
Updated config for topic "bptopic".

OPTION 2:

[root@server-02 ~]# /usr/hdp/current/kafka-broker/bin/kafka-configs.sh --zookeeper sr-hadctl-xt01:2181 --alter --entity-type topics --entity-name bptopic --add-config 'retention.ms=86400000'

Check the config:

[root@server-02 ~]# /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --describe --zookeeper sr-hadctl-xt01:2181 --topic bptopic
Topic:bptopic PartitionCount:1 ReplicationFactor:1 Configs:retention.ms=86400000
Topic: bptopic Partition: 0 Leader: 1001 Replicas: 1001 Isr: 1001
01-31-2018
04:13 PM
It seems that deleting the brokers in ZooKeeper does the trick. Yet all the topics need to be recreated, and the data in the topics will be lost. https://issues.apache.org/jira/browse/KAFKA-4987
01-30-2018
06:11 PM
Hi, I've upgraded HDP from 2.4 to 2.5 to 2.6; Kafka is now at 0.10.1. I have Kerberos configured and working with Ranger. I can create, produce, and consume with ZooKeeper (using kinit and a user that has permissions in a Ranger rule).

WORKING OK:

./kafka-console-producer.sh --broker-list ctl-t01.my.domain:6667,ctl-t02.my.domain:6667,ctl-t01.my.domain:6667 --topic bptopic --security-protocol PLAINTEXTSASL
./kafka-console-consumer.sh --bootstrap-server ctl-t01.my.domain:2181:6667 --topic bptopic --security-protocol PLAINTEXTSASL --from-beginning

When I use the brokers instead of ZooKeeper, the consumer is not consuming.

NOT WORKING:

./kafka-console-consumer.sh --zookeeper ctl-t01.my.domain:2181 --topic bptopic --security-protocol PLAINTEXTSASL --from-beginning

I have checked logs and documentation without finding any solution. Could it be that I need to run rmr /brokers in ZooKeeper? (I'm not sure, but in another environment that was the solution, deleting all the topics.) I want to see if I can keep all the topics, since we keep them for more than a week, and deleting them would be the last resort. Thanks much @Ian Roberts
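A hedged sketch of the broker-based (new consumer) invocation (the consumer.config approach is my assumption for this Kerberized HDP setup, not something confirmed in the thread):

```bash
# Hedged sketch: the new consumer takes host:port of a broker (not ZooKeeper)
# and can carry the security protocol in a consumer config file.
echo "security.protocol=PLAINTEXTSASL" > /tmp/consumer.properties
./kafka-console-consumer.sh --bootstrap-server ctl-t01.my.domain:6667 \
  --topic bptopic --consumer.config /tmp/consumer.properties --from-beginning
```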
01-30-2018
03:49 PM
@Michael Bronson, I changed the script since it wasn't parsing the consumer (btw, great script, thanks):

topico="entrada"
for i in `/usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh sr-hadctl-xt01:2181 ls /consumers 2>&1 | grep consumer | cut -d "[" -f2 | cut -d "]" -f1 | tr ',' "\n"`
do
  /usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh sr-hadctl-xt01:2181 ls /consumers/$i/offsets 2>&1 | grep $topico
  if [ $? == 0 ]
  then
    echo $i
  fi
done
01-29-2018
09:02 PM
Remember: if you have Kafka, you need to change, in Config -> Kafka Brokers -> listeners, back to PLAINTEXT://localhost:6667 (from PLAINTEXTSASL://localhost:6667).
12-19-2017
05:19 PM
This is how I solved it:

delete ambari.repo
yum repolist
wget -nv http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.4.3.0/ambari.repo -O /etc/yum.repos.d/ambari.repo
yum install ambari
11-22-2017
03:51 PM
When disabling Kerberos in Ambari, some configuration is left misconfigured, and on restarting Ambari Server you get this error:

22 Nov 2017 12:30:53,837 INFO [main] KerberosChecker:64 - Checking Ambari Server Kerberos credentials.
22 Nov 2017 12:30:53,858 ERROR [main] KerberosChecker:120 - xxxxxxx.xxxxxxxx.com.ar: unknown error
22 Nov 2017 12:30:53,860 ERROR [main] AmbariServer:1073 - Failed to run the Ambari Server
org.apache.ambari.server.AmbariException: Ambari Server Kerberos credentials check failed.
Check KDC availability and JAAS configuration in /etc/ambari-server/conf/krb5JAASLogin.conf
at org.apache.ambari.server.controller.utilities.KerberosChecker.checkJaasConfiguration(KerberosChecker.java:121)
at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:1064)

The same worked for me: edit /etc/ambari-server/conf/ambari.properties and set the following:

authentication.kerberos.enabled=false
kerberos.check.jaas.configuration=false
11-10-2017
05:28 PM
Hi, where did you change this config? Thanks.
11-10-2017
05:07 PM
Hi, does the root@REALM need to be created in AD, or elsewhere? I would appreciate a little more detailed instructions. Thanks very much :-).
11-07-2017
03:20 PM
Thanks for the lead 🙂
11-01-2017
09:05 PM
Hi,
I have upgraded Ambari and HDP from 2.5 to 2.6.3.
After installing Atlas, I get a warning on the web UI, and the web UI is not accessible.
In /var/log/atlas/application.log I can see that each time I request the URL for the Atlas Dashboard at http://xxxxxxxxxxx.com:21000/login.jsp I get a 500 error. Thanks in advance.

2017-11-01 16:23:08,686 WARN - [pool-2-thread-7:] ~ /login.jsp (AbstractHttpConnection:552)
java.lang.NoClassDefFoundError: Could not initialize class org.apache.jasper.compiler.ErrorDispatcher
at org.apache.jasper.compiler.Compiler.<init>(Compiler.java:139)
at org.apache.jasper.JspCompilationContext.createCompiler(JspCompilationContext.java:288)
at org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:622)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:374)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:492)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:378)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:501)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:575)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:427)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:276)
at org.eclipse.jetty.server.Dispatcher.error(Dispatcher.java:112)
at org.eclipse.jetty.server.handler.ErrorHandler.handle(ErrorHandler.java:86)
at org.eclipse.jetty.server.Response.sendError(Response.java:349)
at org.eclipse.jetty.server.Response.sendError(Response.java:430)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:598)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:427)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:370)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:973)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1035)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:641)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:231)
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:696)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:53)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

application.properties file:

# Generated by Apache Ambari. Tue Oct 31 19:16:18 2017
atlas.audit.hbase.tablename=ATLAS_ENTITY_AUDIT_EVENTS
atlas.audit.hbase.zookeeper.quorum=xxxxxxxxxxxxxxxxxxxxxxxxx.com,xxxxxxxxxxxxxxxxxxxxxxxxx.com,xxxxxxxxxxxxxxxxxxxxxxxxx.com
atlas.audit.zookeeper.session.timeout.ms=60000
atlas.auth.policy.file=/usr/hdp/current/atlas-server/conf/policy-store.txt
atlas.authentication.keytab=/etc/security/keytabs/atlas.service.keytab
atlas.authentication.method.file=true
atlas.authentication.method.file.filename=/usr/hdp/current/atlas-server/conf/users-credentials.properties
atlas.authentication.method.kerberos=false
atlas.authentication.method.ldap=false
atlas.authentication.method.ldap.ad.base.dn=
atlas.authentication.method.ldap.ad.bind.dn=
atlas.authentication.method.ldap.ad.bind.password=
atlas.authentication.method.ldap.ad.default.role=ROLE_USER
atlas.authentication.method.ldap.ad.domain=
atlas.authentication.method.ldap.ad.referral=ignore
atlas.authentication.method.ldap.ad.url=
atlas.authentication.method.ldap.ad.user.searchfilter=(sAMAccountName={0})
atlas.authentication.method.ldap.base.dn=
atlas.authentication.method.ldap.bind.dn=
atlas.authentication.method.ldap.bind.password=
atlas.authentication.method.ldap.default.role=ROLE_USER
atlas.authentication.method.ldap.groupRoleAttribute=cn
atlas.authentication.method.ldap.groupSearchBase=
atlas.authentication.method.ldap.groupSearchFilter=
atlas.authentication.method.ldap.referral=ignore
atlas.authentication.method.ldap.type=ldap
atlas.authentication.method.ldap.url=
atlas.authentication.method.ldap.user.searchfilter=
atlas.authentication.method.ldap.userDNpattern=uid=
atlas.authentication.principal=atlas
atlas.authorizer.impl=simple
atlas.cluster.name=HADOOP_DESARROLLO
atlas.enableTLS=false
atlas.graph.index.search.backend=solr5
atlas.graph.index.search.solr.mode=cloud
atlas.graph.index.search.solr.zookeeper-url=xxxxxxxxxxxxxxx.com:2181/infra-solr,xxxxxxxxxxxxxxx.com:2181/infra-solr,xxxxxxxxxxxxxxxxxxxxxxx.com:2181/infra-solr
atlas.graph.storage.backend=hbase
atlas.graph.storage.hbase.table=atlas_titan
atlas.graph.storage.hostname=xxxxxxxxxxxxxxxxxxxxxxxxx.com,xxxxxxxxxxxxxxxxxxxxxxxxx.com,xxxxxxxxxxxxxxxxxxxxxxxxx.com
atlas.kafka.bootstrap.servers=xxxxxxxxxxxxxx.com:6667,xxxxxxxxxxxxxx.com:6667,xxxxxxxxxxxxxx.com:6667,xxxxxxxxxxxxxxxxxxxxxxxxx.com:6667
atlas.kafka.enable.auto.commit=false
atlas.kafka.hook.group.id=atlas
atlas.kafka.session.timeout.ms=30000
atlas.kafka.zookeeper.connect=xxxxxxxxxxxxxxxxxxxxxxxxx.com:2181,xxxxxxxxxxxxxxxxxxxxxxxxx.com:2181,xxxxxxxxxxxxxxxxxxxxxxxxx.com:2181
atlas.kafka.zookeeper.connection.timeout.ms=30000
atlas.kafka.zookeeper.session.timeout.ms=60000
atlas.kafka.zookeeper.sync.time.ms=20
atlas.lineage.schema.query.hive_table=hive_table where __guid='%s'\, columns
atlas.lineage.schema.query.Table=Table where __guid='%s'\, columns
atlas.notification.create.topics=true
atlas.notification.embedded=false
atlas.notification.replicas=1
atlas.notification.topics=ATLAS_HOOK,ATLAS_ENTITIES
atlas.proxyusers=knox
atlas.rest.address=http://xxxxxxxxxxxxxxxxxxxxxxxxx.com:21000
atlas.server.address.id1=xxxxxxxxxxxxxxxxxxxxxxxxx.com:21000
atlas.server.bind.address=xxxxxxxxxxxxxxxxxxxxxxxxx.com
atlas.server.ha.enabled=false
atlas.server.http.port=21000
atlas.server.https.port=21443
atlas.server.ids=id1
atlas.solr.kerberos.enable=false
atlas.ssl.exclude.protocols=TLSv1.2
atlas.sso.knox.browser.useragent=
atlas.sso.knox.enabled=false
atlas.sso.knox.providerurl=https://xxxxxxxxxxxxxxxxxxxxxxxxx.com:8443/gateway/knoxsso/api/v1/websso
atlas.sso.knox.publicKey=
Alert: HTTP 502 response from http://localhost:21000/api/atlas/admin/status in 0.003s (HTTP Error 502: Proxy Error (Connection refused))
Labels:
- Apache Atlas
- Apache Hadoop