Member since
03-24-2016
91
Posts
7
Kudos Received
5
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1708 | 10-06-2017 04:29 AM |
| | 2163 | 09-21-2017 05:37 AM |
| | 2038 | 08-11-2017 02:34 AM |
| | 1887 | 04-07-2016 03:17 AM |
| | 3307 | 03-29-2016 06:08 AM |
05-30-2019
07:08 AM
I just want to know which tuning options can reduce bandwidth usage.
... View more
05-28-2019
01:27 AM
I use Kafka. The server has 1G Ethernet. Recently I found that the LAN bandwidth of the machines is too high. I used some tools to monitor the traffic and found that most of the transfer is Kafka: total bandwidth frequently reaches 400 Mbps, and the ping of other nodes can suddenly jump from 0.1 ms to 30 ms... So is there some way to tune the bandwidth usage?
... View more
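Not a verified fix for this cluster, but one knob worth looking at is Kafka's client quotas, which cap produce/fetch throughput per client. Assuming a 0.9+ broker, broker-wide defaults can go in server.properties (the values below are illustrative, roughly 10 MB/s per client):

```properties
# server.properties — illustrative default quotas, ~10 MB/s per client
quota.producer.default=10485760
quota.consumer.default=10485760
```

Per-client overrides can also be set dynamically with kafka-configs.sh (producer_byte_rate / consumer_byte_rate). Note that inter-broker replication traffic is not covered by client quotas; newer brokers have separate replication throttles for that.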
Labels:
- Apache Kafka
03-06-2019
06:04 AM
@sneethiraj I notice that in the Ranger configuration there is a user and group sync setting, and it can sync users and groups from a file, Unix users, or LDAP. As mentioned above, is it only Unix users and LDAP groups that can be recognized in Ranger or HDFS?
... View more
04-04-2018
05:39 AM
@Timothy Spann I know how to initialize Superset, and I followed the Superset documentation. I use MySQL to store the Superset data. As I posted above, it shows an error.
... View more
04-03-2018
09:58 AM
@Jay Kumar SenSharma I encountered the same problem. I tried to run: ``` /usr/hdp/2.6.1.0-129/superset/bin/python3.4 /usr/hdp/current/druid-superset/bin/superset init ``` It still shows: /usr/hdp/current/druid-superset/lib/python3.4/importlib/_bootstrap.py:1161: ExtDeprecationWarning: Importing flask.ext.sqlalchemy is deprecated, use flask_sqlalchemy instead.
spec.loader.load_module(spec.name)
/usr/hdp/current/druid-superset/lib/python3.4/importlib/_bootstrap.py:1161: ExtDeprecationWarning: Importing flask.ext.script is deprecated, use flask_script instead.
spec.loader.load_module(spec.name)
Loaded your LOCAL configuration
/usr/hdp/current/druid-superset/lib/python3.4/site-packages/flask_cache/__init__.py:152: UserWarning: Flask-Cache: CACHE_TYPE is set to null, caching is effectively disabled.
warnings.warn("Flask-Cache: CACHE_TYPE is set to null, "
/usr/hdp/current/druid-superset/lib/python3.4/importlib/_bootstrap.py:1161: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
spec.loader.load_module(spec.name)
2018-04-03 17:56:04,163:INFO:flask_appbuilder.base:Registering class MyIndexView on menu
2018-04-03 17:56:04,169:INFO:flask_appbuilder.base:Registering class UtilView on menu
2018-04-03 17:56:04,173:INFO:flask_appbuilder.base:Registering class LocaleView on menu
2018-04-03 17:56:04,178:INFO:flask_appbuilder.base:Registering class ResetPasswordView on menu
2018-04-03 17:56:04,203:INFO:flask_appbuilder.base:Registering class ResetMyPasswordView on menu
2018-04-03 17:56:04,226:INFO:flask_appbuilder.base:Registering class UserInfoEditView on menu
2018-04-03 17:56:04,250:INFO:flask_appbuilder.base:Registering class AuthDBView on menu
2018-04-03 17:56:04,260:INFO:flask_appbuilder.base:Registering class UserDBModelView on menu List Users
2018-04-03 17:56:04,342:INFO:flask_appbuilder.base:Registering class RoleModelView on menu List Roles
2018-04-03 17:56:04,424:INFO:flask_appbuilder.base:Registering class UserStatsChartView on menu User's Statistics
2018-04-03 17:56:04,485:INFO:flask_appbuilder.base:Registering class PermissionModelView on menu Base Permissions
2018-04-03 17:56:04,554:INFO:flask_appbuilder.base:Registering class ViewMenuModelView on menu Views/Menus
2018-04-03 17:56:04,624:INFO:flask_appbuilder.base:Registering class PermissionViewModelView on menu Permission on Views/Menus
2018-04-03 17:56:05,666:INFO:flask_appbuilder.base:Registering class TableColumnInlineView on menu
2018-04-03 17:56:05,708:INFO:flask_appbuilder.base:Registering class DruidColumnInlineView on menu
2018-04-03 17:56:05,748:INFO:flask_appbuilder.base:Registering class SqlMetricInlineView on menu
2018-04-03 17:56:05,785:INFO:flask_appbuilder.base:Registering class DruidMetricInlineView on menu
2018-04-03 17:56:05,842:WARNING:flask_appbuilder.models.filters:Filter type not supported for column: password
2018-04-03 17:56:05,845:INFO:flask_appbuilder.base:Registering class DatabaseView on menu Databases
2018-04-03 17:56:05,896:WARNING:flask_appbuilder.models.filters:Filter type not supported for column: password
2018-04-03 17:56:05,900:INFO:flask_appbuilder.base:Registering class DatabaseAsync on menu
2018-04-03 17:56:05,935:WARNING:flask_appbuilder.models.filters:Filter type not supported for column: password
2018-04-03 17:56:05,938:INFO:flask_appbuilder.base:Registering class DatabaseTablesAsync on menu
2018-04-03 17:56:06,041:INFO:flask_appbuilder.base:Registering class TableModelView on menu Tables
2018-04-03 17:56:06,094:INFO:flask_appbuilder.base:Registering class AccessRequestsModelView on menu Access requests
2018-04-03 17:56:06,147:INFO:flask_appbuilder.base:Registering class DruidClusterModelView on menu Druid Clusters
2018-04-03 17:56:06,201:INFO:flask_appbuilder.base:Registering class SliceModelView on menu Slices
2018-04-03 17:56:06,246:INFO:flask_appbuilder.base:Registering class SliceAsync on menu
2018-04-03 17:56:06,283:INFO:flask_appbuilder.base:Registering class SliceAddView on menu
2018-04-03 17:56:06,321:INFO:flask_appbuilder.base:Registering class DashboardModelView on menu Dashboards
2018-04-03 17:56:06,368:INFO:flask_appbuilder.base:Registering class DashboardModelViewAsync on menu
2018-04-03 17:56:06,406:INFO:flask_appbuilder.base:Registering class LogModelView on menu Action Log
2018-04-03 17:56:06,460:INFO:flask_appbuilder.base:Registering class QueryView on menu Queries
2018-04-03 17:56:06,511:INFO:flask_appbuilder.base:Registering class DruidDatasourceModelView on menu Druid Datasources
2018-04-03 17:56:06,562:INFO:flask_appbuilder.base:Registering class R on menu
2018-04-03 17:56:06,568:INFO:flask_appbuilder.base:Registering class Superset on menu
2018-04-03 17:56:06,667:INFO:flask_appbuilder.base:Registering class CssTemplateModelView on menu CSS Templates
2018-04-03 17:56:06,719:INFO:flask_appbuilder.base:Registering class CssTemplateAsyncModelView on menu
2018-04-03 17:56:06,837:INFO:root:Syncing role definition
2018-04-03 17:56:06,848:INFO:root:Creating database reference
2018-04-03 17:56:06,854:INFO:root:mysql+pymysql://superset:superset@192.168.112.48:3306/superset
Traceback (most recent call last):
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1127, in _execute_context
context = constructor(dialect, self, conn, *args)
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/engine/default.py", line 693, in _init_compiled
for key in compiled_params
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/engine/default.py", line 693, in <genexpr>
for key in compiled_params
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/sql/type_api.py", line 1156, in process
return impl_processor(process_param(value, dialect))
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy_utils/types/encrypted.py", line 237, in process_bind_param
self._update_key()
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy_utils/types/encrypted.py", line 232, in _update_key
self.engine._update_key(key)
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy_utils/types/encrypted.py", line 35, in _update_key
digest.update(key)
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/cryptography/hazmat/primitives/hashes.py", line 92, in update
raise TypeError("data must be bytes.")
TypeError: data must be bytes.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/hdp/current/druid-superset/bin/superset", line 85, in <module>
manager.run()
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/flask_script/__init__.py", line 412, in run
result = self.handle(sys.argv[0], sys.argv[1:])
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/flask_script/__init__.py", line 383, in handle
res = handle(*args, **config)
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/flask_script/commands.py", line 216, in __call__
return self.run(*args, **kwargs)
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/superset/cli.py", line 27, in init
security.sync_role_definitions()
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/superset/security.py", line 122, in sync_role_definitions
get_or_create_main_db()
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/superset/security.py", line 106, in get_or_create_main_db
db.session.commit()
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/orm/scoping.py", line 153, in do
return getattr(self.registry(), name)(*args, **kwargs)
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/orm/session.py", line 943, in commit
self.transaction.commit()
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/orm/session.py", line 467, in commit
self._prepare_impl()
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/orm/session.py", line 447, in _prepare_impl
self.session.flush()
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/orm/session.py", line 2254, in flush
self._flush(objects)
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/orm/session.py", line 2380, in _flush
transaction.rollback(_capture_exception=True)
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/util/compat.py", line 187, in reraise
raise value
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/orm/session.py", line 2344, in _flush
flush_context.execute()
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/orm/unitofwork.py", line 391, in execute
rec.execute(self)
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/orm/unitofwork.py", line 556, in execute
uow
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/orm/persistence.py", line 181, in save_obj
mapper, table, insert)
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/orm/persistence.py", line 866, in _emit_insert_statements
execute(statement, params)
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 948, in execute
return meth(self, multiparams, params)
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
compiled_sql, distilled_params
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1132, in _execute_context
None, None)
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
exc_info
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/util/compat.py", line 186, in reraise
raise value.with_traceback(tb)
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1127, in _execute_context
context = constructor(dialect, self, conn, *args)
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/engine/default.py", line 693, in _init_compiled
for key in compiled_params
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/engine/default.py", line 693, in <genexpr>
for key in compiled_params
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy/sql/type_api.py", line 1156, in process
return impl_processor(process_param(value, dialect))
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy_utils/types/encrypted.py", line 237, in process_bind_param
self._update_key()
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy_utils/types/encrypted.py", line 232, in _update_key
self.engine._update_key(key)
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/sqlalchemy_utils/types/encrypted.py", line 35, in _update_key
digest.update(key)
File "/usr/hdp/current/druid-superset/lib/python3.4/site-packages/cryptography/hazmat/primitives/hashes.py", line 92, in update
raise TypeError("data must be bytes.")
sqlalchemy.exc.StatementError: (builtins.TypeError) data must be bytes. [SQL: 'INSERT INTO dbs (created_on, changed_on, database_name, sqlalchemy_uri, password, cache_timeout, select_as_create_table_as, expose_in_sqllab, allow_run_sync, allow_run_async, allow_ctas, allow_dml, force_ctas_schema, extra, perm, changed_by_fk, created_by_fk) VALUES (%(created_on)s, %(changed_on)s, %(database_name)s, %(sqlalchemy_uri)s, %(password)s, %(cache_timeout)s, %(select_as_create_table_as)s, %(expose_in_sqllab)s, %(allow_run_sync)s, %(allow_run_async)s, %(allow_ctas)s, %(allow_dml)s, %(force_ctas_schema)s, %(extra)s, %(perm)s, %(changed_by_fk)s, %(created_by_fk)s)'] [parameters: [{'sqlalchemy_uri': 'mysql+pymysql://superset:XXXXXXXXXX@192.168.112.48:3306/superset', 'cache_timeout': None, 'password': 'superset', 'force_ctas_schema': None, 'allow_run_sync': True, 'expose_in_sqllab': True, 'database_name': 'main', 'perm': '[main].(id:None)'}]]
So I think there is a bug when Python 3 works with pymysql and SQLAlchemy.
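The traceback bottoms out in digest.update(key) being handed a str. A minimal stdlib reproduction of that Python 3 behavior (this mirrors the failure mode, not Superset's actual code path):

```python
import hashlib

digest = hashlib.sha256()
try:
    digest.update("superset")  # str, as the encryption key arrives here on Python 3
except TypeError:
    print("update() rejects str on Python 3")
digest.update("superset".encode("utf-8"))  # encoding to bytes works
print(digest.hexdigest())
```

A workaround often reported for this combination (my assumption, verify against your version) is to make sure the configured SECRET_KEY is a bytes value so sqlalchemy_utils' EncryptedType never hashes a str.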
... View more
10-12-2017
01:32 AM
@Aravindan Vijayan
I just found some simple Java metrics on Grafana, only GC TIME / GC TIME PARNEW. But I want to see the detailed Java metrics that Ambari Metrics collects, such as DataNode heap usage and young GC count and time. Grafana shows me "This dashboard is managed by Ambari. You may lose any changes made to this dashboard. If you want to customize, make your own copy." So I don't know how to make them show on Grafana automatically.
... View more
10-11-2017
08:27 AM
I use Ambari 2.5.1.0 and HDP 2.4. I opened Grafana and did not find DataNode JVM metrics on the dashboard. Why are these metrics missing, and is there a way to add them automatically?
... View more
Labels:
- Apache Ambari
10-08-2017
11:43 AM
@Aditya Sirna But my OpenJDK version is 1.8, so I think maybe that post is too old...
... View more
10-06-2017
04:29 AM
@Aravindan Vijayan I changed the file '/var/lib/ambari-server/resources/stacks/HDP/2.0.6/hooks/before-START/templates/hadoop-metrics2.properties.j2' and restarted the Ambari server and agent, but it made no difference. However, when I changed the content of /etc/hadoop/conf/hadoop-metrics2.properties and restarted the Ambari server and agent, it worked.
... View more
10-06-2017
04:06 AM
@Aravindan Vijayan Let me try. However, there are some table-not-found errors: Internal Exception: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'ambari.ds_jobimpl_6' doesn't exist
... View more
10-01-2017
03:47 AM
I upgraded Ambari 2.2 to 2.5.1 recently. After the upgrade, I found that some alert statuses are UNKNOWN. Opening an alert definition, the log shows: Properties file doesn't contain namenode.sink.timeline.collector.hosts. Can't define metric collector hosts. But when I open the service tab, the metrics show data, so I don't know why. I checked the configuration file /etc/hadoop/2.4.0.0-169/0/hadoop-metrics2.properties and it shows: datanode.sink.timeline.collector=node4.hadoop
namenode.sink.timeline.collector=node4.hadoop
resourcemanager.sink.timeline.collector=node4.hadoop
nodemanager.sink.timeline.collector=node4.hadoop
jobhistoryserver.sink.timeline.collector=node4.hadoop
journalnode.sink.timeline.collector=node4.hadoop
applicationhistoryserver.sink.timeline.collector=node4.hadoop
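Comparing the two, the alert looks for namenode.sink.timeline.collector.hosts while the file above still defines the old namenode.sink.timeline.collector key. Newer Ambari versions split the collector address into separate host/protocol/port properties; a sketch of the newer layout (property names from memory, host illustrative — verify against a freshly generated hadoop-metrics2.properties):

```properties
# hadoop-metrics2.properties — newer Ambari property layout (illustrative)
datanode.sink.timeline.collector.hosts=node4.hadoop
namenode.sink.timeline.collector.hosts=node4.hadoop
resourcemanager.sink.timeline.collector.hosts=node4.hadoop
# protocol and port moved to their own keys
*.sink.timeline.protocol=http
*.sink.timeline.port=6188
```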
... View more
Labels:
- Apache Ambari
09-25-2017
10:25 AM
One day, my HDFS NameNode accidentally shut down. When I analyzed the log, I found: 2017-09-15T08:26:55.262+0800: 1859843.345: [GC (Allocation Failure) 2017-09-15T08:26:55.262+0800: 1859843.345: [ParNew (promotion failed): 108213K->118016K(118016K), 0.1060261 secs] 2017-09-15T08:26:55.368+0800: 1859843.451: [CMS: 21025420K->3714406K(33423360K), 119.2980533 secs] 21133633K->3714406K(33541376K), [Metaspace: 55012K->55012K(57344K)], 119.4048541 secs] [Times: user=17.19 sys=2.45, real=119.39 secs] So I think the full GC caused the incident. The Java version on my server is OpenJDK 1.8. Is there a guide for tuning the JVM options?
... View more
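The log shows a ParNew promotion failure forcing a ~119 s stop-the-world CMS collection, and a very small young generation (118016K) next to a ~32 GB old generation. A common starting point for CMS on large NameNode heaps (a sketch only — all sizes are illustrative and must be tuned against your heap and GC logs) is to start CMS earlier and give the young generation more room:

```shell
# hadoop-env.sh — illustrative CMS tuning for a large NameNode heap
export HADOOP_NAMENODE_OPTS="${HADOOP_NAMENODE_OPTS} \
  -XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=70 \
  -XX:+UseCMSInitiatingOccupancyOnly \
  -XX:+ParallelRefProcEnabled \
  -XX:NewSize=2g -XX:MaxNewSize=2g"
```

Starting CMS at 70% occupancy trades some background GC CPU for headroom, which is what prevents the promotion-failure fallback to a full compacting collection.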
Tags:
- Hadoop Core
- HDFS
- JAVA8
Labels:
- Apache Hadoop
09-21-2017
05:37 AM
What I'm not clear about is why one server shows 'NoneType' object has no attribute 'modules' but the other is OK. I tried restarting the ambari-agent and now it shows OK. Resolved.
... View more
09-21-2017
03:53 AM
I added the script call time to the label, and I found no time is returned. I will post the script soon.
... View more
09-21-2017
01:38 AM
@Jay SenSharma I know how to deploy a custom monitor and how to start it. But it runs OK on one server and fails on the other. I have reduced the code to simple logic, such as: def execute():
    try:
        return 'OK', 'no errors'
    except Exception:
        return 'UNKNOWN', 'UNKNOWN data'
It still shows me: 'NoneType' object has no attribute 'modules'
And I am sure that running monitor.py manually works correctly.
... View more
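For reference, a sketch of the shape the ambari-agent expects from a script-based alert, based on common Ambari alert examples (the module-level function names are what the dispatcher looks up; verify the exact contract against your Ambari version's bundled alerts):

```python
# monitor.py — minimal Ambari script-alert skeleton (a sketch)

RESULT_OK = 'OK'
RESULT_CRITICAL = 'CRITICAL'

def get_tokens():
    """Config property keys this alert needs; the agent passes their
    current values in via the `configurations` dict."""
    return ()

def execute(configurations={}, parameters={}, host_name=None):
    """Return a (result_code, [label]) tuple."""
    try:
        return (RESULT_OK, ['no errors'])
    except Exception as e:
        return (RESULT_CRITICAL, [str(e)])
```

If either function is missing or the module fails to import on one host (stale .pyc, different Python path), the agent can surface confusing NoneType errors instead of the script's own output, which may explain why only one server fails.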
09-20-2017
12:30 PM
I use HDP 2.4 and Ambari 2.2.1.1. I had customized an Ambari alert and tested it on my test servers. But when I copied it to my production environment, everything seems OK on one server, while the other shows me: 'NoneType' object has no attribute 'modules'. So I went back to the monitor script, which is written in Python. When I run python monitor.py it works, but under the ambari-agent it still fails. So is there a way to test and debug the monitor script?
... View more
Labels:
- Apache Ambari
08-25-2017
03:22 AM
Sadly, I upgraded the agent recently from version 2.5.0.3 to 2.5.1.0, and this issue is still there.
... View more
08-11-2017
02:34 AM
I found that I can refresh the Flume configs through "HOSTS --> select the host whose Flume configs need refreshing --> select Refresh configs from the drop-down menu" to reload the configuration file without restarting the Flume agent. But the problem is that on the service and host dashboard it still shows "Host needs 1 component restarted"...
... View more
08-08-2017
09:40 AM
@Jay SenSharma The URL you posted covers how to create an alert. But I also want to know how to get the Flume metrics easily through Python. I have read the Ambari API and python-ambariclient docs, but found little about how to get Flume metric data.
... View more
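A standard-library sketch of pulling the Flume handler's metrics over Ambari's REST API. The endpoint path and the metrics/flume field name are my assumption from the general Ambari API layout; verify them by browsing /api/v1 on your cluster first:

```python
import base64
import urllib.request

def flume_metrics_url(ambari, cluster, host):
    """Build the host-component metrics URL for the Flume handler."""
    return ("%s/api/v1/clusters/%s/hosts/%s/host_components/FLUME_HANDLER"
            "?fields=metrics/flume" % (ambari, cluster, host))

def fetch(url, user, password):
    """GET the URL with HTTP basic auth and return the raw JSON bytes."""
    req = urllib.request.Request(url)
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        return resp.read()

if __name__ == "__main__":
    # Hypothetical host names, for illustration only
    print(flume_metrics_url("http://ambari:8080", "mycluster", "node1.hadoop"))
```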
08-08-2017
03:31 AM
I use Ambari 2.x and HDP. I use Flume; the metrics include the Flume channel size, but there are no alerts for it, so I want to add one. However, I could not find such an alert. Is there an easy way to add an alert based on metric data?
... View more
Labels:
- Apache Ambari
07-19-2017
03:51 AM
I know what you mean, but my SA only supports servers with RAID5 disks, so...
... View more
07-18-2017
02:26 AM
@rbiswas I use the script hdp-configuration-utils.py from GitHub (https://github.com/hortonworks/hdp-configuration-utils) to calculate the memory settings, and one parameter is -d, the number of disks. My server's disks are RAIDed, so I don't know how to determine the disk number: set it to 1, the number of RAID arrays, or to 12, the number of physical disks?
... View more
07-17-2017
07:26 AM
I use HDP and the script from https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/bk_command-line-installation/content/determine-hdp-memory-config.html But my server uses RAID5, so how should I set the number of disks?
... View more
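The script implements the heuristic from the HDP companion files, where container count is bounded by 1.8 × disks, so -d changes the answer a lot. A sketch of that formula (my paraphrase of the documented heuristic, not the script itself) shows the swing between counting the array as 1 disk versus counting 12 spindles:

```python
import math

def yarn_containers(cores, disks, ram_mb, reserved_mb, min_container_mb):
    """HDP companion-file heuristic: containers = min(2*cores,
    ceil(1.8*disks), available_ram // min_container_size)."""
    available = ram_mb - reserved_mb
    containers = min(2 * cores,
                     math.ceil(1.8 * disks),
                     available // min_container_mb)
    ram_per_container = max(min_container_mb, available // containers)
    return containers, ram_per_container

# 16 cores, 64 GB RAM (8 GB reserved for the OS), 2 GB minimum container:
print(yarn_containers(16, 12, 65536, 8192, 2048))  # 12 spindles -> (22, 2606)
print(yarn_containers(16, 1, 65536, 8192, 2048))   # RAID5 array as 1 disk -> (2, 28672)
```

The disk term is a proxy for independent I/O paths, so counting spindles is the common reading even under RAID, but treat this as a starting point, not cluster-specific advice.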
Tags:
- Hadoop Core
- YARN
Labels:
- Apache YARN
07-12-2017
07:01 AM
@Jay SenSharma It seems strange. I found the version of mysql-connector on another cluster is mysql-connector-java-5.1.17-6.el6.noarch, but nothing is wrong there.
... View more
06-28-2017
05:40 AM
I checked the NameNode log. Because I changed the host of the Ambari Metrics Collector, the log shows post errors to the old collector host.
... View more
06-26-2017
07:39 AM
I use Ambari and HDP. I reinstalled Ambari Metrics and restarted it. After the reinstall, the metric data for NameNode GC count, NameNode GC time, NN Connection Load, NameNode RPC, and NameNode Operations shows "No Data Available". I tried restarting Ambari Metrics, but it did not resolve this. And no errors show in the Ambari Web UI.
... View more
Labels:
- Apache Ambari
- Apache Hadoop