Member since
03-17-2017
32 Posts
1 Kudos Received
3 Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2409 | 04-18-2017 06:16 AM
 | 37163 | 04-10-2017 08:29 AM
 | 32869 | 04-03-2017 07:53 AM
08-17-2017
10:44 AM
I added the following:

UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab("myId@OurCompany.ORG", "/myPathtoMyKeyTab/my.keytab");

I was able to connect and get a list of the files in the HDFS directory; however, the write operation failed with the following exception:

java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:197)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2270)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1701)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1620)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:772)
17/08/17 13:31:49 WARN hdfs.DFSClient: Abandoning BP-2081783877-10.91.61.102-1496699348717:blk_1074056717_315940
17/08/17 13:31:49 WARN hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[10.91.61.106:50010,DS-caf46aea-ebbb-4d8b-8ded-2e476bb0acee,DISK]

Any ideas? Pointers and help are appreciated.
08-16-2017
01:04 PM
I added the following two statements:

conf.addResource("/etc/hadoop/conf.cloudera.hdfs/core-site.xml");
conf.addResource("/etc/hadoop/conf.cloudera.hdfs/hdfs-site.xml");

I also created a jar and ran the program from an edge node:

java -Djava.security.auth.login.config=/security/jaas.conf -Djava.security.krb5.conf=/security/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false -jar spring-data-hadoop-all-1.0.jar

Here are the contents of my jaas.conf:

Client {
    com.sun.security.auth.module.Krb5LoginModule required
    doNotPrompt=true
    useTicketCache=false
    principal="iapima@AOC.NCCOURTS.ORG"
    useKeyTab=true
    keyTab="/home/iapima/security/iapima.keytab"
    debug=true;
};

I am still getting the following exception:

Exception in thread "main" org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2103)
    at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.<init>(DistributedFileSystem.java:887)
    at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.<init>(DistributedFileSystem.java:870)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:815)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:811)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.listLocatedStatus(DistributedFileSystem.java:811)
    at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1742)
    at org.apache.hadoop.fs.FileSystem$5.<init>(FileSystem.java:1863)
    at org.apache.hadoop.fs.FileSystem.listFiles(FileSystem.java:1860)
    at org.nccourts.hadoop.hdfs.AccessHdfs.main(AccessHdfs.java:34)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):

From the command line on the edge node, where I ran the Java program, I am able to do all kinds of manipulation on HDFS: creating directories, copying files, deleting files, etc. It is very frustrating. I can access secured Impala and secured Solr on our cluster, but I cannot seem to be able to access the HDFS file system.
08-15-2017
11:00 AM
I tried several ways to access HDFS on our Kerberos-secured CDH 5.10 cluster, but to no avail. Below is the simple Java code that I tried to run from Eclipse on Windows:
public static void main(final String[] args) throws IOException {
    final Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "www..../");
    conf.set("hadoop.security.authentication", "kerberos");
    final FileSystem fs = FileSystem.get(conf);
    final RemoteIterator<LocatedFileStatus> files = fs.listFiles(new Path("/hdfs/data-lake/prod/cvprod/csv"), true);
    while (files.hasNext()) {
        final LocatedFileStatus fileStatus = files.next();
        // do stuff with the file like ...
        System.out.println(fileStatus.getPath());
    }
    byte[] contents = createContents();
    String pathName = "/hdfs/data-lake/test/myfile.txt";
    FSDataOutputStream output = fs.create(new Path(pathName));
    output.write(contents);
    output.flush();
    output.close();
}

static byte[] createContents() {
    String contents = "This is a test of creating a file on hdfs";
    return contents.getBytes();
}
I ran the program with the following VM flags:
-Djava.security.auth.login.config=c:/iapima/jaas.conf -Djava.security.krb5.conf=c:/iapima/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false
I keep getting the following error:
Exception in thread "main" org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
Any help or pointers are appreciated.
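This "SIMPLE authentication is not enabled" error generally means the client-side Configuration never picked up the cluster's security settings, so the client still offers SIMPLE auth while the NameNode only accepts TOKEN or KERBEROS. A minimal sketch of the core-site.xml properties the client configuration has to see (values assumed, not taken from this cluster; on a CDH edge node they normally come from the files under /etc/hadoop/conf, loaded via conf.addResource(...) or the classpath):

```xml
<!-- Hypothetical excerpt of a Kerberos-enabled core-site.xml -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
```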
04-18-2017
06:16 AM
By adding the annotation to my model (document), the problem was resolved:

@SolrDocument(solrCoreName = "party_name")
public class PartyName {
    ....
}
04-13-2017
06:32 AM
I am using spring-data-solr to query indexed Solr data running on a Cloudera Hadoop cluster (CDH 5.10). The name of my collection is party_name. Below is the code I used to configure the cloud client:

@Configuration
@EnableSolrRepositories(basePackages = { "org.nccourts.repository" }, multicoreSupport = true)
public class SpringSolrConfig {

    @Value("${spring.data.solr.zk-host}")
    private String zkHost;

    @Bean
    public SolrClient solrClient() {
        return new CloudSolrClient(zkHost);
    }

    @Bean
    public SolrTemplate solrTemplate(CloudSolrClient solrClient) throws Exception {
        solrClient.setDefaultCollection("party_name");
        return new SolrTemplate(solrClient);
    }
}

When I run my JUnit test, I am getting the following exception:

org.springframework.data.solr.UncategorizedSolrException: Collection not found: partyname; nested exception is org.apache.solr.common.SolrException: Collection not found: partyname
    at org.springframework.data.solr.core.SolrTemplate.execute(SolrTemplate.java:215)
    at org.springframework.data.solr.core.SolrTemplate.executeSolrQuery(SolrTemplate.java:1030)

Note the "Collection not found: partyname", but the collection name I entered is party_name. I am using Spring Boot version 1.5.2 with the following dependency:

compile('org.springframework.boot:spring-boot-starter-data-solr')

Any help or pointers are appreciated.
Labels:
- Apache Solr
- Cloudera Search
04-11-2017
07:22 AM
By fixing the ZooKeeper string as suggested, "host1:port,host2:port,host3:port/solr", the problem was resolved. Thanks.
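To make the fix concrete: the broken connect string appended /solr to every host, which ZooKeeper parsed as part of one long path (visible in the garbled request paths in the log above); the working form lists all hosts and appends a single /solr chroot after the last one. A small, hypothetical helper showing the working form:

```java
// Hypothetical helper: join the ZooKeeper ensemble hosts with commas and
// append the chroot exactly once, after the last host.
public class ZkConnectString {
    static String build(String chroot, String... hosts) {
        return String.join(",", hosts) + chroot;
    }

    public static void main(String[] args) {
        String ok = build("/solr", "host1:2181", "host2:2181", "host3:2181");
        System.out.println(ok); // prints "host1:2181,host2:2181,host3:2181/solr"
    }
}
```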
04-11-2017
06:17 AM
I received the following exception when trying to query my collection:

2017-04-11 08:28:36,967 INFO main.waitForConnected - Waiting for client to connect to ZooKeeper
2017-04-11 08:28:37,004 INFO main-SendThread(dwh-mst-dev01.stor.nccourts.org:2181).logStartConnect - Opening socket connection to server dwh-mst-dev01.stor.nccourts.org/10.91.62.104:2181. Will not attempt to authenticate using SASL (unknown error)
2017-04-11 08:28:37,031 INFO main-SendThread(dwh-mst-dev01.stor.nccourts.org:2181).primeConnection - Socket connection established to dwh-mst-dev01.stor.nccourts.org/10.91.62.104:2181, initiating session
2017-04-11 08:28:37,035 DEBUG main-SendThread(dwh-mst-dev01.stor.nccourts.org:2181).primeConnection - Session establishment request sent on dwh-mst-dev01.stor.nccourts.org/10.91.62.104:2181
2017-04-11 08:28:37,074 INFO main-SendThread(dwh-mst-dev01.stor.nccourts.org:2181).onConnected - Session establishment complete on server dwh-mst-dev01.stor.nccourts.org/10.91.62.104:2181, sessionid = 0x15ad7740272589b, negotiated timeout = 10000
2017-04-11 08:28:37,091 INFO main-EventThread.process - Watcher org.apache.solr.common.cloud.ConnectionManager@18ef96 name:ZooKeeperConnection Watcher:dwh-mst-dev01.stor.nccourts.org:2181/solr,dwh-mst-dev02.stor.nccourts.org:2181/solr,dwh-mst-dev03.stor.nccourts.org:2181/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
2017-04-11 08:28:37,093 INFO main.waitForConnected - Client is connected to ZooKeeper
2017-04-11 08:28:37,094 INFO main.createZkACLProvider - Using default ZkACLProvider
2017-04-11 08:28:37,132 DEBUG main-SendThread(dwh-mst-dev01.stor.nccourts.org:2181).readResponse - Reading reply sessionid:0x15ad7740272589b, packet:: clientPath:null serverPath:null finished:false header:: 1,3 replyHeader:: 1,55835836011,-101 request:: '/solr%2Cdwh-mst-dev02.stor.nccourts.org:2181/solr%2Cdwh-mst-dev03.stor.nccourts.org:2181/solr/clusterstate.json,F response::
2017-04-11 08:28:37,136 INFO main.makePath - makePath: /clusterstate.json
2017-04-11 08:28:37,162 DEBUG main-SendThread(dwh-mst-dev01.stor.nccourts.org:2181).readResponse - Reading reply sessionid:0x15ad7740272589b, packet:: clientPath:null serverPath:null finished:false header:: 2,3 replyHeader:: 2,55835836011,-101 request:: '/solr%2Cdwh-mst-dev02.stor.nccourts.org:2181/solr%2Cdwh-mst-dev03.stor.nccourts.org:2181/solr/clusterstate.json,F response::
2017-04-11 08:28:37,199 DEBUG main-SendThread(dwh-mst-dev01.stor.nccourts.org:2181).readResponse - Reading reply sessionid:0x15ad7740272589b, packet:: clientPath:null serverPath:null finished:false header:: 3,1 replyHeader:: 3,55835836012,-101 request:: '/solr%2Cdwh-mst-dev02.stor.nccourts.org:2181/solr%2Cdwh-mst-dev03.stor.nccourts.org:2181/solr/clusterstate.json,,v{s{31,s{'world,'anyone}}},0 response::
2017-04-11 08:28:37,209 DEBUG main.close - Closing session: 0x15ad7740272589b
2017-04-11 08:28:37,210 DEBUG main.close - Closing client for session: 0x15ad7740272589b
2017-04-11 08:28:37,238 DEBUG main-SendThread(dwh-mst-dev01.stor.nccourts.org:2181).readResponse - Reading reply sessionid:0x15ad7740272589b, packet:: clientPath:null serverPath:null finished:false header:: 4,-11 replyHeader:: 4,55835836013,0 request:: null response:: null
2017-04-11 08:28:37,239 DEBUG main-SendThread(dwh-mst-dev01.stor.nccourts.org:2181).run - An exception was thrown while closing send thread for session 0x15ad7740272589b : Unable to read additional data from server sessionid 0x15ad7740272589b, likely server has closed socket
2017-04-11 08:28:37,240 DEBUG main.disconnect - Disconnecting client for session: 0x15ad7740272589b
2017-04-11 08:28:37,241 INFO main.close - Session: 0x15ad7740272589b closed
2017-04-11 08:28:37,241 INFO main-EventThread.run - EventThread shut down
Exception in thread "main" org.apache.solr.common.cloud.ZooKeeperException:
    at org.apache.solr.client.solrj.impl.CloudSolrServer.connect(CloudSolrServer.java:270)
    at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:548)
    at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
    at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
    at org.nccourts.hadoop.solrj.SolrjQuery.getResults(SolrjQuery.java:34)
    at org.nccourts.hadoop.solrj.SolrjQuery.main(SolrjQuery.java:22)
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /clusterstate.json
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
    at org.apache.solr.common.cloud.SolrZkClient$10.execute(SolrZkClient.java:507)
    at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
    at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:504)
    at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:461)
    at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:448)
    at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:435)
    at org.apache.solr.common.cloud.ZkCmdExecutor.ensureExists(ZkCmdExecutor.java:94)
    at org.apache.solr.common.cloud.ZkCmdExecutor.ensureExists(ZkCmdExecutor.java:84)
    at org.apache.solr.common.cloud.ZkStateReader.createClusterStateWatchersAndUpdate(ZkStateReader.java:295)
    at org.apache.solr.client.solrj.impl.CloudSolrServer.connect(CloudSolrServer.java:261)
    ... 5 more

I am using Cloudera CDH 5.10. My program is very simple:

public class SolrjQuery {
    public static void main(final String[] args) throws SolrServerException, IOException {
        final String zkHostString = "dwh-mst-dev01.stor.nccourts.org:2181/solr,dwh-mst-dev02.stor.nccourts.org:2181/solr,dwh-mst-dev03.stor.nccourts.org:2181/solr";
        final CloudSolrServer solr = new CloudSolrServer(zkHostString);
        solr.setDefaultCollection("party_name");
        final SolrDocumentList docs = SolrjQuery.getResults(solr);
        for (final SolrDocument doc : docs) {
            System.out.println(doc);
        }
    }

    static SolrDocumentList getResults(final CloudSolrServer server) {
        final SolrQuery query = new SolrQuery();
        query.setQuery("*:*");
        QueryResponse rsp = null;
        try {
            rsp = server.query(query);
        } catch (final SolrServerException e) {
            e.printStackTrace();
            return null;
        }
        final SolrDocumentList docs = rsp.getResults();
        return docs;
    }
}
Labels:
- Cloudera Search
04-11-2017
05:38 AM
I also received a similar exception when trying to query my collection. The ZooKeeper log is identical to the one in my 06:17 AM post: the client connects, attempts makePath: /clusterstate.json against the garbled chroot path, and the server closes the socket. It ends with:

Exception in thread "main" org.apache.solr.common.cloud.ZooKeeperException:
    at org.apache.solr.client.solrj.impl.CloudSolrServer.connect(CloudSolrServer.java:270)
    at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:548)
    at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
    at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
    at org.nccourts.hadoop.solrj.SolrjQuery.getResults(SolrjQuery.java:34)
    at org.nccourts.hadoop.solrj.SolrjQuery.main(SolrjQuery.java:22)
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /clusterstate.json
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
    at org.apache.solr.common.cloud.SolrZkClient$10.execute(SolrZkClient.java:507)
    at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
    at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:504)
    at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:461)
    at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:448)
    at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:435)
    at org.apache.solr.common.cloud.ZkCmdExecutor.ensureExists(ZkCmdExecutor.java:94)
    at org.apache.solr.common.cloud.ZkCmdExecutor.ensureExists(ZkCmdExecutor.java:84)
    at org.apache.solr.common.cloud.ZkStateReader.createClusterStateWatchersAndUpdate(ZkStateReader.java:295)
    at org.apache.solr.client.solrj.impl.CloudSolrServer.connect(CloudSolrServer.java:261)
    ... 5 more

My program is very simple:

public class SolrjQuery {
    public static void main(final String[] args) throws SolrServerException, IOException {
        final String zkHostString = "dwh-mst-dev01.stor.nccourts.org:2181/solr,dwh-mst-dev02.stor.nccourts.org:2181/solr,dwh-mst-dev03.stor.nccourts.org:2181/solr";
        final CloudSolrServer solr = new CloudSolrServer(zkHostString);
        solr.setDefaultCollection("party_name");
        final SolrDocumentList docs = SolrjQuery.getResults(solr);
        for (final SolrDocument doc : docs) {
            System.out.println(doc);
        }
    }

    static SolrDocumentList getResults(final CloudSolrServer server) {
        final SolrQuery query = new SolrQuery();
        query.setQuery("*:*");
        QueryResponse rsp = null;
        try {
            rsp = server.query(query);
        } catch (final SolrServerException e) {
            e.printStackTrace();
            return null;
        }
        final SolrDocumentList docs = rsp.getResults();
        return docs;
    }
}
04-11-2017
04:35 AM
My post includes my entire code. This line of code:

if (zk != null) zk.close();

is not in my code. That's where the error is happening. I happen to know that because I downloaded the source code. This is my code:

public static void main(final String[] args) throws SolrServerException, IOException {
    final String zkHostString = "dwh-mst-dev01.stor.nccourts.org:2181/solr,dwh-mst-dev02.stor.nccourts.org:2181/solr,dwh-mst-dev03.stor.nccourts.org:2181/solr";
    final CloudSolrServer solr = new CloudSolrServer(zkHostString);
    final UpdateRequest request = new UpdateRequest();
    request.setAction(UpdateRequest.ACTION.COMMIT, true, true);
    request.setParam("collection", "party_name");
    final SolrInputDocument doc = new SolrInputDocument();
    final List<String> records = SolrJPopulater.loadSampleData();
    for (final String record : records) {
        final String[] fields = record.split(",");
        doc.addField("id", fields[0]);
        doc.addField("county", fields[1]);
        doc.addField("year", Integer.parseInt(fields[2]));
        doc.addField("court_type", fields[3]);
        doc.addField("seq_num", Integer.parseInt(fields[4]));
        doc.addField("party_role", fields[5]);
        doc.addField("party_num", Integer.parseInt(fields[6]));
        doc.addField("party_status", fields[7]);
        doc.addField("biz_name", fields[8]);
        doc.addField("prefix", fields[9]);
        doc.addField("last_name", fields[10]);
        doc.addField("first_name", fields[11]);
        doc.addField("middle_name", fields[12]);
        doc.addField("suffix", fields[13]);
        doc.addField("in_regards_to", fields[14]);
        doc.addField("case_status", fields[15]);
        doc.addField("row_of_origin", fields[16]);
        final UpdateResponse response = solr.add(doc);
        System.out.println("status code=" + response.getStatus());
    }
    solr.commit();
}
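The record-splitting step in the loop above can be sketched in isolation. The sample row below is invented (the real SolrJPopulater data isn't shown) and only the first few of the 17 fields are printed:

```java
// Standalone sketch of the split-and-map step, with a made-up sample record.
public class RecordSplitSketch {
    public static void main(String[] args) {
        // Hypothetical CSV row: id, county, year, court_type, seq_num, party_role
        String record = "1,WAKE,2016,CV,42,PLAINTIFF";
        String[] fields = record.split(",");
        // Field positions mirror the indexing code: 0=id, 1=county, 2=year, ...
        System.out.println("id=" + fields[0] + " county=" + fields[1]
                + " year=" + Integer.parseInt(fields[2]));
    }
}
```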
04-10-2017
08:29 AM
1 Kudo
After increasing the memory, I was able to successfully index my data. Thanks.