Member since: 07-31-2013
Posts: 98
Kudos Received: 54
Solutions: 19
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2959 | 11-23-2016 07:37 AM |
| | 3054 | 05-18-2015 02:04 PM |
| | 5231 | 05-13-2015 07:33 AM |
| | 3995 | 05-12-2015 05:36 AM |
| | 4318 | 04-06-2015 06:05 AM |
10-27-2013
08:36 AM
1 Kudo
Hey Markovich,

Within CM, there are several tunable properties across the various modules that are not common enough to have dedicated options in CM. To handle those, you can add them as a safety valve. To do this:

- Go to Services -> MapReduce -> Configuration (View and Edit).
- Expand Service-Wide and click on Advanced.
- There you should see "MapReduce Service Configuration Safety Valve for mapred-site.xml". Paste the following in there, with whatever value you want for the number of cache directories:

```
<property>
  <name>mapreduce.tasktracker.local.cache.numberdirectories</name>
  <value>5000</value>
</property>
```

- Then save the config and restart the MapReduce service.

This is true for all the various modules. If you don't find a value when you search, it's probably not directly settable, but every module will have an Advanced section with a "Safety Valve", so you can put your properties in there when necessary.

Hope this helps.

Thanks
Chris
10-27-2013
08:29 AM
Hey Andrey,

Based on the description, on the Hue side you are definitely running into the Hue bug: jobs work until the JT fails over, and then they stop.

As for the MapReduce issue, I'm not actually sure; I'll take a look around. It may be worthwhile to post that in the MR community as well.

Thanks
Chris
10-25-2013
07:02 AM
Hey Markovich,

The jobcache by default will max out at 10000 directories, so you should not go above the ~80gb mark there. However, this is configurable, and it seems like in your case maybe 5000 directories, or even 1000, may be enough. You can set:

mapreduce.tasktracker.local.cache.numberdirectories

to a lower value and see if that helps.

Hope this helps.

Thanks
Chris
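For readers not on CM, the same property can be set directly in mapred-site.xml on the TaskTracker hosts. A minimal sketch of the fragment, assuming a target of 5000 cache directories (an illustrative value, not one from this thread):

```xml
<!-- Sketch only: 5000 is a placeholder; tune it to how fast your jobcache grows. -->
<property>
  <name>mapreduce.tasktracker.local.cache.numberdirectories</name>
  <value>5000</value>
</property>
```

The TaskTrackers need a restart for the change to take effect.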
10-25-2013
06:33 AM
Hey Markovich,

I definitely think you are running into that bug, then. It's worth noting that it should only fail on existing workflows that point to the wrong JT; if you create a new workflow, it should work until the next JT failover. Do new workflows work for you?

I will take a look at your other question as well.

Thanks
Chris
10-24-2013
01:21 PM
1 Kudo
Hey Markovich,

Are you running the workflow from inside Hue or from the Oozie command line? If from within Hue, then you are running into HUE-1631, and it won't be fixed until a later release of CDH. If you are not using Hue, can you attach your job.properties file and workflow?

Thanks
Chris
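For reference, a minimal Oozie job.properties usually looks something like the sketch below. The hostnames and paths here are placeholders, not values from this thread:

```properties
# Hypothetical hosts and paths -- substitute your own.
nameNode=hdfs://namenode.example.com:8020
jobTracker=jobtracker.example.com:8021
oozie.wf.application.path=${nameNode}/user/${user.name}/workflows/myapp
```

If jobTracker points at a JT that is no longer active after a failover, submissions will fail, which matches the behavior described in this thread.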
10-16-2013
10:31 AM
Hello,

Just to clarify, are you trying to do:

```
sudo apt-get install solr-server
```

(and not the manager), or:

```
sudo apt-get install solr-server
```

Can you attach the contents of this file: /etc/apt/sources.list

Thanks
Chris
10-02-2013
09:40 AM
Can you log in to any host and run:

```
[cconner@cdh44-1 ~]$ zookeeper-client
[zk: localhost:2181(CONNECTED) 0] ls /solr/collections
[collection2, collection1]
[zk: localhost:2181(CONNECTED) 1] ls /solr/collections/collection1
[leader_elect, leaders]
```

If collection1 is there, you should see it when you enter the ZK client and run `ls /solr/collections`.

Thanks
Chris
09-19-2013
07:34 AM
1 Kudo
Hello,

It seems like it is timing out for some reason. Can you try the following:

- First make sure that you can reach the Solr Admin UI on one host: http://<hostname>:8983/solr/
- Once you have confirmed the UI works, try:

```
solrctl --solr http://<hostname>:8983/solr collection --create collection1 -s 2
```

If that works, then it seems whichever Solr instance solrctl is getting from ZK is having an issue. Perhaps check each UI to see if any aren't working, or restart them all and try again?

Hope this helps.

Thanks
Chris
09-18-2013
07:53 AM
1 Kudo
Hey Srini,

When you first created the collection, you ran a command similar to:

```
solrctl instancedir --generate $HOME/solr_configs
```

This command created the configs that you imported for Solr. You can see the various configs by running:

```
solrctl instancedir --list
```

Then pick the config that goes with your collection and run:

```
solrctl instancedir --get <name_from_list> /path/to/local_fs
```

Then change the schema.xml in /path/to/local_fs/conf and run:

```
solrctl instancedir --update <name_from_list> /path/to/local_fs
```

The schema will then get updated. Note: when you update the schema, you have to reindex all your documents, or else they won't have indexes for the latest schema changes.

Hope this helps.

Thanks
Chris
08-14-2013
08:03 AM
Hello,

You have two options here:

- As dvohra mentioned, you could copy the morphline.conf file to /etc/flume-ng/conf, give the full path in your agent config in CM, and then restart.
- If you want to use the "Flume-NG Solr Sink" config section in CM to configure your morphlines, then you need to change "morphline.conf" to "morphlines.conf" (notice the "s" at the end).

Here is my agent config as an example:

```
avro.sources=src
avro.sinks=solrSink
avro.channels=memoryChannel
avro.sources.src.type=avro
avro.sources.src.bind=cdh43-1.test.com
avro.sources.src.port=8889
avro.sinks.solrSink.type=org.apache.flume.sink.solr.morphline.MorphlineSolrSink
avro.sinks.solrSink.channel=memoryChannel
avro.sinks.solrSink.morphlineFile=morphlines.conf
avro.channels.memoryChannel.type=memory
avro.channels.memoryChannel.capacity=4096
avro.channels.memoryChannel.transactionCapacity=100
avro.channels.memoryChannel.byteCapacity=0
avro.sources.src.channels=memoryChannel
```

You can also see the morphlines file that CM is actually using by looking in "/var/run/cloudera-scm-agent/process/<id>-flume-AGENT", where <id> is the most recently created; you'll see a "morphlines.conf" there.

Hope this helps.

Thanks
Chris