Member since: 11-28-2016
Posts: 20
Kudos Received: 0
Solutions: 0
07-11-2017
06:23 AM
Thanks for the quick reply. I will try to modify it manually.
07-10-2017
08:32 AM
I am using HDP-2.5, Ambari-2.4.2.0, and Solr-5.3.0 on a Kerberized cluster, and I am getting an error when I create a collection. The command used to create the collection:
$SOLR_HOME/bin/solr create -c SolrCollection1 -d data_driven_schema_configs -n mySolrConfigs -s 2 -rf 2
ERROR: Failed to create collection 'SolrCollection1' due to: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://hostname:8983/solr: Error CREATEing SolrCore 'SolrCollection1_shard2_replica2': Unable to create core [SolrCollection1_shard2_replica2] Caused by: Found 2 configuration sections when at most 1 is allowed matching expression: directoryFactory
Please help me figure out where I went wrong. This is my log file:
ERROR (OverseerThreadFactory-5-thread-2-processing-n:hostname:8983_solr) [ ] o.a.s.c.OverseerCollectionProcessor Error from shard: http://hostname:8983/solr
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hostname:8983/solr: Error CREATEing SolrCore 'SolrCollection1_shard1_replica2': Unable to create core [SolrCollection1_shard1_replica2] Caused by: Found 2 configuration sections when at most 1 is allowed matching expression: directoryFactory
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:216)
at org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:181)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:210)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
1511013 INFO (OverseerThreadFactory-5-thread-2-processing-n:hostname:8983_solr) [ ] o.a.s.c.OverseerCollectionProcessor Finished create command on all shards for collection: SolrCollection1
1511015 INFO (OverseerStateUpdate-242386254059733211-hostname:8983_solr-n_0000000010) [ ] o.a.s.c.Overseer processMessage: queueSize: 3, message = {
"operation":"deletecore",
"core":"SolrCollection1_shard2_replica2",
"node_name":"hostname:8983_solr",
"collection":"SolrCollection1",
"core_node_name":"core_node2"} current state version: 23
1511017 INFO (OverseerStateUpdate-242386254059733211-hostname:8983_solr-n_0000000010) [ ] o.a.s.c.o.ZkStateWriter going to update_collection /collections/SolrCollection1/state.json version: 2
1511020 INFO (OverseerStateUpdate-242386254059733211-hostname:8983_solr-n_0000000010) [ ] o.a.s.c.Overseer processMessage: queueSize: 3, message = {
"operation":"deletecore",
"core":"SolrCollection1_shard2_replica1",
"node_name":"hostname:8983_solr",
"node_name":"hostname:8983_solr",
"collection":"SolrCollection1",
"core_node_name":"core_node4"} current state version: 23
1511022 INFO (OverseerStateUpdate-242386254059733211-hostname:8983_solr-n_0000000010) [ ] o.a.s.c.o.ZkStateWriter going to update_collection /collections/SolrCollection1/state.json version: 3
1511022 INFO (zkCallback-4-thread-2-processing-n:hostname:8983_solr) [ ] o.a.s.c.DistributedQueue NodeChildrenChanged fired on path /overseer/queue state SyncConnected
1511027 INFO (OverseerThreadFactory-5-thread-2-processing-n:hostname:8983_solr) [ ] o.a.s.c.OverseerCollectionProcessor Overseer Collection Processor: Message id:/overseer/collection-queue-work/qn-0000000028 complete, response:{failure={null=org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://hostname:8983/solr: Error CREATEing SolrCore 'SolrCollection1_shard1_replica1': Unable to create core [SolrCollection1_shard1_replica1] Caused by: Found 2 configuration sections when at most 1 is allowed matching expression: directoryFactory,null=org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://hostname:8983/solr: Error CREATEing SolrCore 'SolrCollection1_shard2_replica2': Unable to create core [SolrCollection1_shard2_replica2] Caused by: Found 2 configuration sections when at most 1 is allowed matching expression: directoryFactory,null=org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://hostname:8983/solr: Error CREATEing SolrCore 'SolrCollection1_shard2_replica1': Unable to create core [SolrCollection1_shard2_replica1] Caused by: Found 2 configuration sections when at most 1 is allowed matching expression: directoryFactory,null=org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://hostname:8983/solr: Error CREATEing SolrCore 'SolrCollection1_shard1_replica2': Unable to create core [SolrCollection1_shard1_replica2] Caused by: Found 2 configuration sections when at most 1 is allowed matching expression: directoryFactory}}
1511029 INFO (zkCallback-4-thread-2-processing-n:hostname:8983_solr) [ ] o.a.s.c.DistributedQueue NodeDataChanged fired on path /overseer/collection-queue-work/qnr-0000000028 state SyncConnected
1511032 INFO (zkCallback-4-thread-2-processing-n:hostname:8983_solr) [ ] o.a.s.c.DistributedQueue NodeChildrenChanged fired on path /overseer/collection-queue-work state SyncConnected
1511044 INFO (OverseerStateUpdate-242386254059733211-hostname:8983_solr-n_0000000010) [ ] o.a.s.c.Overseer processMessage: queueSize: 1, message = {
"operation":"deletecore",
"core":"SolrCollection1_shard1_replica2",
"node_name":"hostname:8983_solr",
"collection":"SolrCollection1",
"core_node_name":"core_node3"} current state version: 23
1511044 INFO (qtp1450821318-20) [ ] o.a.s.s.SolrDispatchFilter [admin] webapp=null path=/admin/collections params={replicationFactor=2&maxShardsPerNode=4&collection.configName=mySolrConfigs&name=SolrCollection1&action=CREATE&numShards=2&wt=json} status=0 QTime=1622
1511047 INFO (zkCallback-4-thread-2-processing-n:hostname:8983_solr) [ ] o.a.s.c.DistributedQueue NodeChildrenChanged fired on path /overseer/queue state SyncConnected
1511052 INFO (zkCallback-4-thread-2-processing-n:hostname:8983_solr) [ ] o.a.s.c.c.ZkStateReader A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
1511054 INFO (zkCallback-4-thread-2-processing-n:hostname:8983_solr) [ ] o.a.s.c.c.ZkStateReader Updated cluster state version :
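One quick way to confirm whether the configset uploaded to ZooKeeper really contains two directoryFactory sections is to pull it back out and count them. A sketch only, assuming the standard Solr 5.3 install layout for zkcli.sh and no extra ZooKeeper chroot; the /tmp path is just an example:
server/scripts/cloud-scripts/zkcli.sh -zkhost hostname:2181 -cmd getfile /configs/mySolrConfigs/solrconfig.xml /tmp/solrconfig.xml
# A count greater than 1 reproduces the "Found 2 configuration sections" error above
grep -c "<directoryFactory" /tmp/solrconfig.xml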
Thanks in advance.
06-30-2017
04:18 AM
Yes, I got the result, as you can see:
[hdusr@vfarm01d hadoop]$ hadoop classpath
/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/*:/opt/hadoop/share/hadoop/common/*:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoop/share/hadoop/hdfs/*:/opt/hadoop/share/hadoop/yarn/lib/*:/opt/hadoop/share/hadoop/yarn/*:/opt/hadoop/share/hadoop/mapreduce/lib/*:/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/contrib/capacity-scheduler/*.jar
I have also set it as you suggested, export CLASSPATH=`hadoop classpath`, but I am still getting the same error.
06-30-2017
02:53 AM
I have set my classpath this way; is this the right way? If I am wrong, please guide me to the right way.
export HADOOP_CLASSPATH=/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/*:/opt/hadoop/share/hadoop/common/*:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoop/share/hadoop/hdfs/*:/opt/hadoop/share/hadoop/yarn/lib/*:/opt/hadoop/share/hadoop/yarn/*:/opt/hadoop/share/hadoop/mapreduce/lib/*:/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/*:/opt/hadoop/share/hadoop/common/*:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoop/share/hadoop/hdfs/*:/opt/hadoop/share/hadoop/yarn/lib/*:/opt/hadoop/share/hadoop/yarn/*:/opt/hadoop/share/hadoop/mapreduce/lib/*:/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/share/hadoop/*::/opt/hadoop/contrib/capacity-scheduler/*.jar
06-30-2017
01:22 AM
On Ubuntu I used Hadoop 2.7.3, and on CentOS I used Hadoop 2.7.2.
06-28-2017
02:43 PM
I am using Hadoop 2.7.2 on CentOS 7 (64-bit) and trying to extract a PDF file using Apache Tika. When I execute this code in Eclipse I am able to extract the file; I then created a jar and executed it on Hadoop on an Ubuntu machine, and it also works fine. But when I use that jar, with the same code, on an Ambari-installed Hadoop cluster, I get this error: Error: java.net.MalformedURLException: unknown protocol: hdfs. I don't understand why it is not working; I have tried a lot to resolve this issue but failed. Please help. This is my code:
package tikka.com;
import java.io.IOException;
import java.net.URL;
import java.util.Date;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
public class TikaMapreduce extends Configured implements Tool
{
public static class TikaMapper extends Mapper<Text, Text, Text, Text>
{
public void map(Text key, Text value, Context context)
throws IOException, InterruptedException
{
context.write(key, value);
}
}
public static void main(String[] args) throws Exception
{
int exit = ToolRunner.run(new Configuration(), new TikaMapreduce(),
args);
System.exit(exit);
}
@Override
public int run(String[] args) throws Exception
{
Configuration conf = new Configuration();
FileSystem hdfs = FileSystem.get(conf);
URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
Job job = new Job(conf, "TikaMapreduce");
job.setJarByClass(getClass());
job.setJobName("TikRead");
job.setInputFormatClass(TikaFileInputFormat.class);
System.out.println("read input file");
FileInputFormat.addInputPath(job, new Path("hdfs://hostname:8020/input-data/pwc-canada-issues-reporting-form.pdf"));
job.setMapperClass(TikaMapper.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
job.setOutputFormatClass(TikaOutPutFormt.class);
FileOutputFormat.setOutputPath(job, new Path("hdfs://hostname:8020/output-data/pdfoutput"+(new Date().getTime())));
System.out.println("output pdf");
return job.waitForCompletion(true) ? 0 : 1;
}
}
package tikka.com;
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
public class TikaFileInputFormat extends FileInputFormat<Text, Text>
{
@Override
public RecordReader<Text, Text> createRecordReader(InputSplit split,
TaskAttemptContext context) throws IOException, InterruptedException {
return new TikaRecordReader();
}
}
package tikka.com;
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class TikaOutPutFormt extends FileOutputFormat<Text, Text>
{
@Override
public RecordWriter<Text, Text> getRecordWriter(TaskAttemptContext context)
throws IOException, InterruptedException {
Path path=FileOutputFormat.getOutputPath(context);
Path fullapth=new Path(path,"PDF.txt");
FileSystem fs=path.getFileSystem(context.getConfiguration());
FSDataOutputStream output=fs.create(fullapth,context);
return new TikaRecordWrite(output);
}
}
package tikka.com;
import java.io.IOException;
import java.net.URL;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.tika.Tika;
import org.apache.tika.exception.TikaException;
public class TikaRecordReader extends RecordReader<Text, Text>
{
private Text key = new Text();
private Text value = new Text();
private FileSplit fileSplit;
private Configuration conf;
private boolean processed = false;
@Override
public void close() throws IOException
{
}
@Override
public Text getCurrentKey() throws IOException, InterruptedException
{
return key;
}
@Override
public Text getCurrentValue() throws IOException, InterruptedException
{
return value;
}
@Override
public float getProgress() throws IOException, InterruptedException
{
return processed ? 1.0f : 0.0f;
}
@Override
public void initialize(InputSplit split, TaskAttemptContext context)
throws IOException, InterruptedException
{
this.fileSplit = (FileSplit) split;
this.conf = context.getConfiguration();
}
@Override
public boolean nextKeyValue() throws IOException, InterruptedException
{
if (!processed) {
Path path = fileSplit.getPath();
key.set(path.toString());
@SuppressWarnings("unused")
FileSystem fs = path.getFileSystem(conf);
@SuppressWarnings("unused")
FSDataInputStream fin = null;
try
{
String con = new Tika().parseToString(new URL(path.toString()));
String string = con.replaceAll("[$%&+,:;=?#|']", " ");
String string2 = string.replaceAll("\\s+", " ");
String lo = string2.toLowerCase();
value.set(lo);
} catch (TikaException e) {
e.printStackTrace();
}
processed = true;
return true;
}
else
{
return false;
}
}
}
package tikka.com;
import java.io.DataOutputStream;
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
public class TikaRecordWrite extends RecordWriter<Text, Text>
{
private DataOutputStream out;
public TikaRecordWrite(DataOutputStream output) {
out=output;
try {
out.writeBytes("result:\r\n");
} catch (IOException e) {
e.printStackTrace();
}
}
@Override
public void close(TaskAttemptContext context) throws IOException,
InterruptedException {
out.close();
}
@Override
public void write(Text key, Text value) throws IOException,
InterruptedException {
out.writeBytes(key.toString());
out.writeBytes(",");
out.writeBytes(value.toString());
out.writeBytes("\r\n");
}
}
Output after executing the jar file:
read input file
output pdf
17/06/28 02:29:10 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
17/06/28 02:29:11 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
17/06/28 02:29:12 INFO input.FileInputFormat: Total input paths to process : 1
17/06/28 02:29:12 INFO mapreduce.JobSubmitter: number of splits:1
17/06/28 02:29:12 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1498630846170_0002
17/06/28 02:29:13 INFO impl.YarnClientImpl: Submitted application application_1498630846170_0002
17/06/28 02:29:13 INFO mapreduce.Job: The url to track the job: https://hostname:8088/proxy/application_1498630846170_0002/
17/06/28 02:29:13 INFO mapreduce.Job: Running job: job_1498630846170_0002
17/06/28 02:29:23 INFO mapreduce.Job: Job job_1498630846170_0002 running in uber mode : false
17/06/28 02:29:23 INFO mapreduce.Job: map 0% reduce 0%
17/06/28 02:29:51 INFO mapreduce.Job: Task Id : attempt_1498630846170_0002_m_000000_0, Status : FAILED
Error: java.net.MalformedURLException: unknown protocol: hdfs
at java.net.URL.<init>(URL.java:600)
at java.net.URL.<init>(URL.java:490)
at java.net.URL.<init>(URL.java:439)
at tikka.com.TikaRecordReader.nextKeyValue(TikaRecordReader.java:77)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
17/06/28 02:30:09 INFO mapreduce.Job: Task Id : attempt_1498630846170_0002_m_000000_1, Status : FAILED
Error: java.net.MalformedURLException: unknown protocol: hdfs
at java.net.URL.<init>(URL.java:600)
at java.net.URL.<init>(URL.java:490)
at java.net.URL.<init>(URL.java:439)
at tikka.com.TikaRecordReader.nextKeyValue(TikaRecordReader.java:77)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
17/06/28 02:30:24 INFO mapreduce.Job: Task Id : attempt_1498630846170_0002_m_000000_2, Status : FAILED
Error: java.net.MalformedURLException: unknown protocol: hdfs
at java.net.URL.<init>(URL.java:600)
at java.net.URL.<init>(URL.java:490)
at java.net.URL.<init>(URL.java:439)
at tikka.com.TikaRecordReader.nextKeyValue(TikaRecordReader.java:77)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
17/06/28 02:30:39 INFO mapreduce.Job: map 100% reduce 0%
17/06/28 02:30:40 INFO mapreduce.Job: map 100% reduce 100%
17/06/28 02:30:44 INFO mapreduce.Job: Job job_1498630846170_0002 failed with state FAILED due to: Task failed task_1498630846170_0002_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
17/06/28 02:30:44 INFO mapreduce.Job: Counters: 13
Job Counters
Failed map tasks=4
Killed reduce tasks=1
Launched map tasks=4
Other local map tasks=3
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=67789
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=67789
Total time spent by all reduce tasks (ms)=0
Total vcore-milliseconds taken by all map tasks=67789
Total vcore-milliseconds taken by all reduce tasks=0
Total megabyte-milliseconds taken by all map tasks=69415936
Total megabyte-milliseconds taken by all reduce tasks=0
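For reference, below is a minimal, untested sketch of an alternative TikaRecordReader.nextKeyValue() that avoids java.net.URL (and therefore the missing hdfs protocol handler) by opening the split through the Hadoop FileSystem API and handing the stream straight to Tika. It reuses the fields and imports already declared in the TikaRecordReader class above.
@Override
public boolean nextKeyValue() throws IOException, InterruptedException {
    if (processed) {
        return false;
    }
    Path path = fileSplit.getPath();
    key.set(path.toString());
    FileSystem fs = path.getFileSystem(conf);
    // Tika can parse an InputStream directly, so no "hdfs" URL handler is needed in the task JVM
    try (FSDataInputStream fin = fs.open(path)) {
        String text = new Tika().parseToString(fin);
        String cleaned = text.replaceAll("[$%&+,:;=?#|']", " ").replaceAll("\\s+", " ").toLowerCase();
        value.set(cleaned);
    } catch (TikaException e) {
        throw new IOException("Tika failed to parse " + path, e);
    }
    processed = true;
    return true;
}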
Thanks in advance.
06-16-2017
10:20 AM
I am trying to create a collection on HDFS, which is Kerberized, while Solr itself is not Kerberized. We are creating the collection in cloud mode with the index stored in HDFS. Is it possible to create a collection this way, or do we need to run Solr in Kerberos mode too? Any help would be much appreciated; thanks in advance. When I create the collection, I am facing this issue:
Connecting to ZooKeeper at hostname:2181,hostname:2181,hostname:2181 ...
Re-using existing configuration directory mySolrConfigs
Creating new collection 'SolrCollection' using command:
http://localhost:8983/solr/admin/collections?action=CREATE&name=SolrCollection&numShards=2&replicationFactor=2&maxShardsPerNode=4&collection.configName=mySolrConfigs
ERROR: Failed to create collection 'SolrCollection' due to: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://hostname:8983/solr: Error CREATEing SolrCore 'SolrCollection_shard2_replica1': Unable to create core [SolrCollection_shard2_replica1] Caused by: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
This is my updated solrconfig.xml:
<directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
<str name="solr.hdfs.home">hdfs://hostname:8020/user/solr/test1data</str>
<bool name="solr.hdfs.blockcache.enabled">true</bool>
<str name="solr.hdfs.confdir">/etc/hadoop/conf</str>
<int name="solr.hdfs.blockcache.slab.count">1</int>
<bool name="solr.hdfs.blockcache.direct.memory.allocation">true</bool>
<int name="solr.hdfs.blockcache.blocksperbank">16384</int>
<bool name="solr.hdfs.blockcache.read.enabled">true</bool>
<bool name="solr.hdfs.nrtcachingdirectory.enable">true</bool>
<int name="solr.hdfs.nrtcachingdirectory.maxmergesizemb">16</int>
<int name="solr.hdfs.nrtcachingdirectory.maxcachedmb">192</int>
<bool name="solr.hdfs.security.kerberos.enabled">yes</bool>
<str name="solr.hdfs.security.kerberos.keytabfile">/etc/security/keytabs/nm.service.keytab</str>
<str name="solr.hdfs.security.kerberos.principal">krbtgt/EXAMPLE.COM@EXAMPLE.COM</str>
</directoryFactory>
# Command used to create the collection
./bin/solr create -c SolrCollection -d data_driven_schema_configs -n mySolrConfigs -s 2 -rf 2
# Command used to start Solr in HDFS mode
./bin/solr start -c -p 8983 -z hostname:2181,hostname:2181,hostname:2181 -Dsolr.directoryFactory=HdfsDirectoryFactory -Dsolr.lock.type=hdfs -Dsolr.hdfs.home=hdfs://hostname:8020/user/solr/test1data
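For comparison, the solr.hdfs.security.kerberos.enabled flag of HdfsDirectoryFactory is documented as a true/false boolean, and the principal is normally a dedicated service principal rather than krbtgt. A sketch only; the keytab path and principal below are placeholders, not values taken from this cluster:
<bool name="solr.hdfs.security.kerberos.enabled">true</bool>
<str name="solr.hdfs.security.kerberos.keytabfile">/etc/security/keytabs/solr.service.keytab</str>
<str name="solr.hdfs.security.kerberos.principal">solr/hostname@EXAMPLE.COM</str>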
05-25-2017
02:19 PM
Thank you for responding quickly.
05-25-2017
07:57 AM
I want to integrate different database applications and services, like Neo4j and Apache ManifoldCF, with Ambari. If this is possible, please provide me with a link or documentation. Thanks in advance.
05-18-2017
06:55 AM
# Generated by Apache Ambari. Thu May 18 02:42:00 2017
atlas.audit.hbase.tablename=ATLAS_ENTITY_AUDIT_EVENTS
atlas.audit.hbase.zookeeper.quorum=localhost4,localhost2,localhost3
atlas.audit.zookeeper.session.timeout.ms=1500
atlas.auth.policy.file=/etc/atlas/conf/policy-store.txt
atlas.authentication.keytab=/etc/security/keytabs/atlas.service.keytab
atlas.authentication.method.file=true
atlas.authentication.method.file.filename=/etc/atlas/conf/users-credentials.properties
atlas.authentication.method.kerberos=false
atlas.authentication.method.ldap=false
atlas.authentication.method.ldap.ad.base.dn=
atlas.authentication.method.ldap.ad.bind.dn=
atlas.authentication.method.ldap.ad.bind.password=
atlas.authentication.method.ldap.ad.default.role=ROLE_USER
atlas.authentication.method.ldap.ad.domain=
atlas.authentication.method.ldap.ad.referral=ignore
atlas.authentication.method.ldap.ad.url=
atlas.authentication.method.ldap.ad.user.searchfilter=(sAMAccountName={0})
atlas.authentication.method.ldap.base.dn=
atlas.authentication.method.ldap.bind.dn=
atlas.authentication.method.ldap.bind.password=
atlas.authentication.method.ldap.default.role=ROLE_USER
atlas.authentication.method.ldap.groupRoleAttribute=cn
atlas.authentication.method.ldap.groupSearchBase=
atlas.authentication.method.ldap.groupSearchFilter=
atlas.authentication.method.ldap.referral=ignore
atlas.authentication.method.ldap.type=none
atlas.authentication.method.ldap.url=
atlas.authentication.method.ldap.user.searchfilter=
atlas.authentication.method.ldap.userDNpattern=uid=
atlas.authentication.principal=atlas
atlas.authorizer.impl=simple
atlas.client.connectTimeoutMSecs=60000
atlas.client.readTimeoutMSecs=60000
atlas.cluster.name=hadoop
atlas.enableTLS=true
atlas.graph.index.search.backend=solr5
atlas.graph.index.search.solr.mode=cloud
atlas.graph.index.search.solr.zookeeper-url=localhost3:2181/infra-solr,localhost2:2181/infra-solr,localhost1/infra-solr
atlas.graph.storage.backend=hbase
atlas.graph.storage.hbase.table=atlas_titan
atlas.graph.storage.hostname=localhost4,localhost2,localhost3
atlas.kafka.auto.commit.enable=true
atlas.kafka.auto.commit.interval.ms=1000
atlas.kafka.bootstrap.servers=localhost2:6667,localhost3:9092
atlas.kafka.data==${sys:atlas.home}/data/kafka
atlas.kafka.hook.group.id=atlas
atlas.kafka.zookeeper.connect=localhost2:2181,localhost3:2181,localhost4:2181
atlas.kafka.zookeeper.connection.timeout.ms=250
atlas.kafka.zookeeper.session.timeout.ms=450
atlas.kafka.zookeeper.sync.time.ms=25
atlas.lineage.schema.query.hive_table=hive_table where __guid='%s'\, columns
atlas.lineage.schema.query.Table=Table where __guid='%s'\, columns
atlas.notification.create.topics=true
atlas.notification.embedded=true
atlas.notification.replicas=1
atlas.notification.topics=ATLAS_HOOK,ATLAS_ENTITIES
atlas.rest.address=https://localhost2:21443
atlas.server.address.id1=localhost2:21443
atlas.server.bind.address=localhost2
atlas.server.ha.enabled=false
atlas.server.http.port=21000
atlas.server.https.port=21443
atlas.server.ids=id1
atlas.solr.kerberos.enable=false
cert.stores.credential.provider.path=jceks://file/atlas.jceks
client.auth.enabled=false
keystore.file=/etc/cert/keystore.jks
truststore.file=/etc/cert/keystore.jks
05-16-2017
06:52 PM
Yes, HBase, Kafka, and Ambari Infra are running and working fine, but I am getting the error in Atlas only.
05-16-2017
06:50 PM
Thank you for the response. I have set it there to hostname:2181,hostname:2181,hostname:2181, which I think is the right value, and the error is still the same. Let me know if I made a mistake and help me fix this issue, or please provide a link that will help fix it.
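One detail that stands out in the atlas-application.properties posted above (05-18-2017): the last entry of atlas.graph.index.search.solr.zookeeper-url has no client port, unlike the other two. If that is not intentional, a corrected line, keeping the per-host /infra-solr znode pattern the file already uses, might look like this:
atlas.graph.index.search.solr.zookeeper-url=localhost3:2181/infra-solr,localhost2:2181/infra-solr,localhost1:2181/infra-solr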
04-08-2017
06:12 AM
Hi, I am using Hue 3.11.0, Hadoop 2.7.3, and Solr 6.2.1. I have designed a collection, indexed data into it, and am trying to search for a word that is present in a document, but the output is "Your search did not match any documents." I do get results when I search for the full word: for example, "samsung" returns the Samsung name and so on. But when I search for "samso" it finds nothing, and it does not suggest any keyword that matches "samsung" (see img. 1 and 2). I just want to know whether this kind of search is possible, i.e. autosuggestion and autocorrection/spell check. If it is possible, please provide a link or any documentation. Thanks in advance.
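As a quick check that partial-word matching works at the Solr level outside Hue, a trailing-wildcard query can be sent directly to the collection. A sketch only; the collection name and field name below are placeholders:
# Matches indexed terms beginning with "samso", e.g. "samsung"
curl "http://localhost:8983/solr/mycollection/select?q=name:samso*&wt=json"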
04-03-2017
12:18 PM
Hi, I am using Hue 3.11.0, Solr 6.2.1, and Hadoop 2.7.3. I am facing an issue when selecting the +index option in the search drop-down. The error is: server could not be contacted properly: 'znode'. I have done the configuration in hue.ini under [Indexer] (enable new index=true), and I have also put the jar file in tmp/smar_indexer_lib. I started Solr pointing to ZooKeeper with bin/solr start -c -z localhost:2181 and it works fine, and ZooKeeper is also started manually, but I am still getting the issue in Hue. Please help me get out of this; see the error in the image. Thanks in advance.
03-08-2017
09:28 PM
Thank you. Is there any way to upgrade HDP 2.3.0 to HDP 2.5.3, or Ambari 2.1.1 to 2.4.2, directly, or do we need to install from scratch?
03-08-2017
09:30 AM
Hi,
I am using HDP-2.3.0.0-2557 and Ambari 2.1.1. I completed my Hadoop installation through Ambari and then added HBase and other services, but when I go to start the HBase services I find that the HBase installation failed.
I tried to reinstall, but it gives an installation-failed error with a command to run, like '/usr/bin/yum -d 0 -e 0 -y install 'hbase_2_3_*'. I ran that command, but it also gives an error; you can see the screenshot.
But when I check under Admin > Stack and Versions, HBase shows as installed (see screenshot). Please help me get out of this issue. Thanks in advance.
11-28-2016
11:43 PM
Hi everyone, I am new to Hue. I am using the MapR distribution, where Hue 3.9.0 is installed by default. I have configured everything in hue.ini for Solr search, but I can't see the Search option in the Hue UI. Please help me resolve this issue, or if there is any documentation for installing the Search application, please provide it. Thanks in advance.