Member since: 04-11-2016
38 Posts
13 Kudos Received
5 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 54047 | 01-04-2017 11:43 PM |
| | 4642 | 09-05-2016 04:07 PM |
| | 12221 | 09-05-2016 03:50 PM |
| | 3028 | 08-30-2016 08:15 PM |
| | 4661 | 08-30-2016 01:01 PM |
03-12-2025
05:06 AM
Hi all, I am trying to use Hadoop from a Jupyter Notebook, but when I open a terminal in the notebook I cannot see a Hadoop cluster terminal; I only see terminals on the local file path. I want to see a Hadoop terminal here, not the local file path terminal. Can anyone assist me?
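For context, here is roughly how I am checking whether the notebook environment can see the cluster at all (a minimal sketch; the presence of a Hadoop client and a HADOOP_CONF_DIR on the notebook host are assumptions on my side):

```python
# Minimal check from inside the Jupyter kernel: is a Hadoop client visible here?
# If it is not, the built-in terminal can only show the local filesystem.
import os
import shutil
import subprocess

print("HADOOP_CONF_DIR:", os.environ.get("HADOOP_CONF_DIR"))  # should point at the cluster configs
print("hdfs binary:", shutil.which("hdfs"))                    # None means no Hadoop client on this host

if shutil.which("hdfs"):
    # List the HDFS root; a purely local terminal would only show local paths.
    result = subprocess.run(["hdfs", "dfs", "-ls", "/"], capture_output=True, text=True)
    print(result.stdout or result.stderr)
```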
09-17-2021
03:19 AM
I am trying to install elasticsearch-6.4.2 on my cluster (HDP 3.1, Ambari 2.7.3). The installation completed successfully, but the service could not start, and this error was encountered:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/3.1/services/ELASTICSEARCH/package/scripts/es_master.py", line 168, in <module>
ESMaster().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
method(env)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 1011, in restart
self.start(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.1/services/ELASTICSEARCH/package/scripts/es_master.py", line 153, in start
self.configure(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.1/services/ELASTICSEARCH/package/scripts/es_master.py", line 86, in configure
group=params.es_group
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 123, in action_create
content = self._get_content()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 160, in _get_content
return content()
File "/usr/lib/ambari-agent/lib/resource_management/core/source.py", line 52, in __call__
return self.get_content()
File "/usr/lib/ambari-agent/lib/resource_management/core/source.py", line 144, in get_content
rendered = self.template.render(self.context)
File "/usr/lib/ambari-agent/lib/ambari_jinja2/environment.py", line 891, in render
return self.environment.handle_exception(exc_info, True)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.1/services/ELASTICSEARCH/package/templates/elasticsearch.master.yml.j2", line 93, in top-level template code
action.destructive_requires_name: {{action_destructive_requires_name}}
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/config_dictionary.py", line 73, in __getattr__
raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'hostname' was not found in configurations dictionary!
I modified the discovery.zen.ping.unicast.hosts property in elastic-config.xml and the hostname property in elasticsearch-env.xml; however, it still could not start and the same error occurred. Do you have any idea?
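For reference, this is the kind of change I would expect to avoid the missing-key failure, sketched against the service's params.py (the configuration path and the fallback value are assumptions on my side, not the management pack's actual code):

```python
# Hypothetical excerpt of the service's params.py: resolve 'hostname' with a
# fallback instead of letting the Jinja2 template hit a missing configuration key.
from resource_management.libraries.functions.default import default
from resource_management.libraries.script.script import Script

config = Script.get_config()

# default() returns the fallback instead of raising Fail when the key is absent.
# "/configurations/elasticsearch-env/hostname" and "localhost" are placeholders.
hostname = default("/configurations/elasticsearch-env/hostname", "localhost")
```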
04-21-2021
03:45 AM
Sorry, it's a maximum of 8060 characters.
05-26-2020
11:52 PM
@VidyaSargur Thank you for the response and the suggestion; I will create a new thread for my problem.
Edit:
I have created my new question here.
Thanks and regards,
Wasif
03-25-2020
05:31 AM
Is it possible to define a STRUCT element that has an @ sign at the beginning, e.g. "@site": "Los Angeles"? We can live with the column actually showing up as site rather than @site. If we can't do it in the HiveQL syntax, then we will have to preprocess the JSON to remove the @ sign, which would be annoying but doable.
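As a rough sketch of that preprocessing fallback (the sample record and the extra key are just illustrative):

```python
# Strip a leading "@" from every JSON key, recursively, before the data is
# loaded into the Hive table, so the struct field can simply be named `site`.
import json

def strip_at_keys(obj):
    """Return a copy of obj with any leading '@' removed from dict keys."""
    if isinstance(obj, dict):
        return {key.lstrip("@"): strip_at_keys(value) for key, value in obj.items()}
    if isinstance(obj, list):
        return [strip_at_keys(item) for item in obj]
    return obj

record = json.loads('{"@site": "Los Angeles", "visits": 3}')
print(json.dumps(strip_at_keys(record)))  # {"site": "Los Angeles", "visits": 3}
```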
05-30-2018
12:11 PM
@amit nandi Can you provide step-by-step instructions on how to install Anaconda for HDP?
12-26-2018
07:19 AM
@Aditya Sirna Can you please suggest how to remove the params? I tried, but I am unable to save the configuration and restart Storm.
09-27-2017
01:31 PM
1 Kudo
A Machine Learning Model learns from data. As you get new incremental data, the Machine Learning model needs to be upgraded. A Machine Learning Model factory ensures that, once you have a model deployed in production, continuous learning is also happening on the incremental new data ingested in the production environment. As the deployed ML model's performance decays, a newly trained and serialized model needs to be deployed. An A/B test between the deployed model and the newly trained model can score them both, to evaluate the performance of the deployed model versus the incrementally trained model.
In order to build a Machine Learning Model factory, we first have to establish a robust road to production. The foundational framework is to establish three environments: DEV, TEST and PROD.
1 - DEV: A development environment where the Data Scientists have their own data puddle in order to perform data exploration, profile the data, develop the machine learning features from the data, build the model, train and test it on the limited subset, and then commit to git to transport the code to the next stages. For the purpose of scaling and tuning the learning of the Machine Learning model, we establish a DEV Validation environment, where the model learning is scaled with as much historical data as possible and tuned.
2 - TEST: The TEST environment is a pre-production environment where we run the machine learning models through integration tests and ready the move of the Machine Learning model to production in two branches:
2a - Model deployment: the trained, serialized Machine Learning model is deployed in the production environment.
2b - Continuous training: the Machine Learning model goes through continuous training on incremental data.
3 - PROD: The Production environment is where live data is ingested. In the production environment, a deployment server hosts the serialized trained model. The deployed model exposes a REST API to deliver predictions on live data queries.
The ML model code runs in production, ingesting incremental live data and being continuously trained.
The performance of the deployed model and of the continuously trained model is measured. If the deployed model shows decay in prediction performance, it is swapped for a newer serialized version of the continuously trained model (a rough sketch of this check is included below).
Model performance can be tracked by closing the loop with user feedback and counting True Positives, False Positives, True Negatives and False Negatives. This choreography of training and deploying machine learning models in production is the heart of the ML model factory. The road to production depicts the journey of building Machine Learning models within the DEV/TEST/PROD environments.
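As a rough sketch of that swap decision (the metric choice, the promotion margin and the feedback counts are illustrative assumptions, not a prescribed implementation):

```python
# Illustrative sketch: compare the deployed model against the continuously
# trained candidate on the same labeled feedback, using TP/FP/FN counts,
# and promote the candidate only if it clearly wins.

def f1_from_counts(tp, fp, fn):
    """F1 score computed from true-positive / false-positive / false-negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

def should_promote(deployed_counts, candidate_counts, margin=0.02):
    """Swap in the candidate only if its F1 beats the deployed model by a margin."""
    return f1_from_counts(**candidate_counts) > f1_from_counts(**deployed_counts) + margin

# Illustrative counts gathered from user feedback on the same set of queries.
deployed = {"tp": 820, "fp": 140, "fn": 200}
candidate = {"tp": 870, "fp": 120, "fn": 150}
print("promote candidate:", should_promote(deployed, candidate))
```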
12-15-2017
08:33 AM
And how do we implement this? What are the steps to install it, and how do we install it on an existing HDP cluster?
03-14-2019
04:28 PM
sc.version (in spark-shell or pyspark) or spark-submit --version
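A quick sketch of the first option, assuming a PySpark session is available (from the command line it is spark-submit --version):

```python
# Print the Spark version from a PySpark session; sc.version is the same value
# exposed on the SparkContext.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
print(spark.version)                 # e.g. 2.x.y, depending on the cluster
print(spark.sparkContext.version)    # same version, via the SparkContext
```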