<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Oryx log info of ALS in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27854#M6098</link>
    <description>&lt;P&gt;Sean,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I want to know a little more about the Oryx logs below (from the ALS computation).&lt;/P&gt;&lt;P&gt;In particular, what is the heap&amp;nbsp;number? Does it reflect the memory used by the Oryx&amp;nbsp;computation layer during model computation?&lt;/P&gt;&lt;P&gt;Sometimes the number is not close to the heap size configured for Oryx, yet it triggers a warning,&amp;nbsp;so I want to confirm what the heap number shown below means.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks.&lt;/P&gt;&lt;P&gt;Jason&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;Sat May 23 08:57:48 PDT 2015 INFO 5800000 X/tag rows computed (7876MB heap)
Sat May 23 08:57:50 PDT 2015 INFO 5900000 X/tag rows computed (10487MB heap)
Sat May 23 08:57:53 PDT 2015 INFO 6000000 X/tag rows computed (7108MB heap)&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Sat, 23 May 2015 16:13:43 GMT</pubDate>
    <dc:creator>Jason.Chen</dc:creator>
    <dc:date>2015-05-23T16:13:43Z</dc:date>
    <item>
      <title>Oryx log info of ALS</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27854#M6098</link>
      <description>&lt;P&gt;Sean,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I want to know a little more about the Oryx logs below (from the ALS computation).&lt;/P&gt;&lt;P&gt;In particular, what is the heap&amp;nbsp;number? Does it reflect the memory used by the Oryx&amp;nbsp;computation layer during model computation?&lt;/P&gt;&lt;P&gt;Sometimes the number is not close to the heap size configured for Oryx, yet it triggers a warning,&amp;nbsp;so I want to confirm what the heap number shown below means.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks.&lt;/P&gt;&lt;P&gt;Jason&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;Sat May 23 08:57:48 PDT 2015 INFO 5800000 X/tag rows computed (7876MB heap)
Sat May 23 08:57:50 PDT 2015 INFO 5900000 X/tag rows computed (10487MB heap)
Sat May 23 08:57:53 PDT 2015 INFO 6000000 X/tag rows computed (7108MB heap)&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 23 May 2015 16:13:43 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27854#M6098</guid>
      <dc:creator>Jason.Chen</dc:creator>
      <dc:date>2015-05-23T16:13:43Z</dc:date>
    </item>
    <item>
      <title>Re: Oryx log info of ALS</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27860#M6099</link>
      <description>Yes, it is just the current heap usage, which is probably not near the max you set. That is normal. What warning do you mean?</description>
      <pubDate>Sat, 23 May 2015 17:39:17 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27860#M6099</guid>
      <dc:creator>srowen</dc:creator>
      <dc:date>2015-05-23T17:39:17Z</dc:date>
    </item>
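Sean's point above, that the logged figure is the heap currently in use rather than the configured maximum, can be illustrated with a minimal Java sketch. This is not Oryx's actual logging code; it only shows how such a number is typically derived from the JVM's Runtime:

```java
public class HeapLogDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Used heap = heap currently allocated to the JVM minus its free portion.
        long usedMB = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        // The configured ceiling (-Xmx); the used figure fluctuates well below it.
        long maxMB = rt.maxMemory() / (1024 * 1024);
        System.out.println("INFO ... rows computed (" + usedMB + "MB heap of " + maxMB + "MB max)");
    }
}
```

The used figure rises and falls with garbage collection, so it normally sits well below -Xmx; a value persistently near the maximum is what would signal real memory pressure.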
    <item>
      <title>Re: Oryx log info of ALS</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27868#M6100</link>
      <description>&lt;P&gt;I set the heap size to 18GB.&lt;/P&gt;&lt;P&gt;During the ALS computation, the log shows the following memory warning. It looks like it's because of the heap size. One confusing thing is that it reports 19244MB of heap used. If that report is correct, the process should throw an Out-Of-Memory error, because my 18GB heap is smaller than 19244MB.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Jason&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;Sat May 23 15:36:34 PDT 2015 INFO 3800000 X/tag rows computed (19244MB heap)
Sat May 23 15:36:34 PDT 2015 WARNING Memory is low. Increase heap size with -Xmx, decrease new generation size with larger -XX:NewRatio value, and/or use -XX:+UseCompressedOops&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 24 May 2015 06:36:49 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27868#M6100</guid>
      <dc:creator>Jason.Chen</dc:creator>
      <dc:date>2015-05-24T06:36:49Z</dc:date>
    </item>
    <item>
      <title>Re: Oryx log info of ALS</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27872#M6101</link>
      <description>If you're not seeing a problem, you can ignore it. The thing I'd watch for is whether you are nearly out of memory and spending a lot of time in GC. If so, then more heap or those other settings might help.&lt;BR /&gt;&lt;BR /&gt;Are you sure the heap is just 18GB? I agree this doesn't quite make sense otherwise. The memory estimate is just that, but it shouldn't ever be more than the heap total.</description>
      <pubDate>Sun, 24 May 2015 11:30:29 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27872#M6101</guid>
      <dc:creator>srowen</dc:creator>
      <dc:date>2015-05-24T11:30:29Z</dc:date>
    </item>
    <item>
      <title>Re: Oryx log info of ALS</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27876#M6102</link>
      <description>&lt;P&gt;Sean,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for your reply.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;(1) Yes, the heap size is set to 18GB. Here is what I run (for the Oryx ALS computation):&lt;/P&gt;&lt;P&gt;java -Xmx18432m -Dconfig.file=/xxx/oryx.conf -jar /xxx/oryx-computation.jar&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;(2) A side question: in the Oryx configuration file (&lt;A target="_blank" href="https://github.com/cloudera/oryx/blob/master/common/src/main/resources/reference.conf"&gt;https://github.com/cloudera/oryx/blob/master/common/src/main/resources/reference.conf&lt;/A&gt;), there are several computation settings that I can tune from the Oryx configuration file.&lt;/P&gt;&lt;P&gt;I think I can also pass those settings as Java JVM parameters, and they should override the default values in the config file. Can you confirm?&lt;/P&gt;&lt;P&gt;For example (using model.features and model.alpha):&lt;/P&gt;&lt;P&gt;java -Xmx18432m -Dconfig.file=/xxx/oryx.conf -Dmodel.features=50 -Dmodel.alpha=50 -jar /xxx/oryx-computation.jar&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Jason&lt;/P&gt;</description>
      <pubDate>Sun, 24 May 2015 16:22:36 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27876#M6102</guid>
      <dc:creator>Jason.Chen</dc:creator>
      <dc:date>2015-05-24T16:22:36Z</dc:date>
    </item>
    <item>
      <title>Re: Oryx log info of ALS</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27881#M6103</link>
      <description>&lt;P&gt;Yes, it uses Typesafe Config (&lt;A target="_blank" href="https://github.com/typesafehub/config"&gt;https://github.com/typesafehub/config&lt;/A&gt;), so you should be able to set values on the command line too. Hm, maybe I should change that log line to also output the current max heap, if only to be more informative and help debug. I'm not sure why you are seeing that.&lt;/P&gt;</description>
      <pubDate>Sun, 24 May 2015 19:54:27 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27881#M6103</guid>
      <dc:creator>srowen</dc:creator>
      <dc:date>2015-05-24T19:54:27Z</dc:date>
    </item>
    <item>
      <title>Re: Oryx log info of ALS</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27884#M6104</link>
      <description>&lt;P&gt;Sean,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I&amp;nbsp;continued to dig into the memory usage, but moved my focus to the oryx-serving layer.&lt;/P&gt;&lt;P&gt;I noticed in the Oryx serving log that it loads several main objects:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;(1) Loaded feature vectors from .../Y/0.csv.gz (this is the item matrix)&lt;/P&gt;&lt;P&gt;(2) Loaded known items from .../knownItems/0.csv.gz&lt;/P&gt;&lt;P&gt;(3) Loaded feature vectors from .../X/0.csv.gz (this is the user matrix)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Based on these loaded objects, I tried to compute the memory the model uses when Oryx serving starts. Assuming a feature rank of 50:&lt;/P&gt;&lt;P&gt;(1) (# of items) * 50 * 4 bytes (each feature vector holds 50 floats, 4 bytes each in Java)&lt;/P&gt;&lt;P&gt;(2) a "long" user ID (8 bytes) plus 8 bytes * (# of known items for each user)&lt;/P&gt;&lt;P&gt;(3) (# of users) * 50 * 4 bytes (each feature vector holds 50 floats, 4 bytes each in Java)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The memory usage is basically (1) + (2) + (3).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Am I missing any important memory&amp;nbsp;consumers when Oryx serving loads?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 09 Jun 2015 02:19:19 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27884#M6104</guid>
      <dc:creator>Jason.Chen</dc:creator>
      <dc:date>2015-06-09T02:19:19Z</dc:date>
    </item>
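Jason's back-of-the-envelope estimate can be written out as a small calculation. The counts below are hypothetical placeholders, not real data sizes; the 4-byte float and 8-byte long sizes match the assumptions in his post:

```java
public class ServingMemEstimate {
    public static void main(String[] args) {
        // Hypothetical counts -- substitute your own totals.
        long numItems = 2_000_000L;
        long numUsers = 5_000_000L;
        long avgKnownItemsPerUser = 20L;
        int features = 50;   // model.features
        int floatBytes = 4;  // one Java float per feature
        int longBytes = 8;   // one Java long per ID

        long yMatrix = numItems * features * floatBytes;   // (1) item feature vectors
        long knownItems = numUsers * (longBytes + avgKnownItemsPerUser * longBytes); // (2)
        long xMatrix = numUsers * features * floatBytes;   // (3) user feature vectors

        long totalMB = (yMatrix + knownItems + xMatrix) / (1024 * 1024);
        System.out.println("Rough serving-layer data estimate: " + totalMB + "MB");
    }
}
```

With these placeholder counts the estimate comes out near 2GB, which shows how the data footprint can be far below a large configured heap.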
    <item>
      <title>Re: Oryx log info of ALS</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27886#M6105</link>
      <description>&lt;P&gt;That's right, though there's probably a little more than this due to other JVM overheads and other much smaller data structures, but yeah that's a good start at an estimate.&lt;/P&gt;</description>
      <pubDate>Mon, 25 May 2015 07:07:22 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27886#M6105</guid>
      <dc:creator>srowen</dc:creator>
      <dc:date>2015-05-25T07:07:22Z</dc:date>
    </item>
    <item>
      <title>Re: Oryx log info of ALS</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27898#M6106</link>
      <description>&lt;P&gt;Sean,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for the confirmation.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Yes, I understand that there could be more, due to JVM overhead, the stack, code/data structures, etc.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I noticed that our oryx-serving uses 9GB of memory after starting up, but the data seems to need only about 1.8GB, including&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Based on this, I do not understand why there is such a big difference between 9GB and 1.8GB. Is my estimate wrong? Any thoughts?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Jason&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 09 Jun 2015 02:21:02 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27898#M6106</guid>
      <dc:creator>Jason.Chen</dc:creator>
      <dc:date>2015-06-09T02:21:02Z</dc:date>
    </item>
    <item>
      <title>Re: Oryx log info of ALS</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27899#M6107</link>
      <description>&lt;P&gt;If you just mean the heap has grown to 9GB, that is normal in the sense that it does not mean 9GB of memory is actually in use. If you have an 18GB heap, then a major GC has likely not happened, since there is no memory pressure. I would expect this to drop significantly after a major GC. To test, you can force a GC on the running process with "jcmd &amp;lt;pid&amp;gt; GC.run" in Java 7+.&lt;/P&gt;</description>
      <pubDate>Mon, 25 May 2015 17:19:29 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/27899#M6107</guid>
      <dc:creator>srowen</dc:creator>
      <dc:date>2015-05-25T17:19:29Z</dc:date>
    </item>
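Sean's suggestion can also be reproduced from inside a JVM: the sketch below allocates short-lived garbage and then requests a collection, which is the same effect that running jcmd's GC.run against the process has externally. Note that System.gc() is only a request to the JVM, not a guarantee.

```java
public class GcDropDemo {
    private static long usedMB() {
        Runtime rt = Runtime.getRuntime();
        return (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
    }

    public static void main(String[] args) {
        // Allocate roughly 1GB of immediately unreachable garbage.
        for (int i = 0; i < 1_000; i++) {
            byte[] junk = new byte[1 << 20]; // 1MB, dropped on the next iteration
        }
        System.out.println("before GC: " + usedMB() + "MB");
        System.gc(); // request a collection; used heap typically drops afterwards
        System.out.println("after GC:  " + usedMB() + "MB");
    }
}
```

This illustrates why a "used heap" figure of 9GB can shrink dramatically after a major GC: most of it was dead objects the collector simply had no pressure to reclaim yet.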
    <item>
      <title>Re: Oryx log info of ALS</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/28012#M6108</link>
      <description>&lt;P&gt;Sean,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Yes, we tried forcing a GC, and it helped us identify the real memory usage.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We are also trying to&amp;nbsp;investigate the memory used by the Oryx computation layer, and to use Hadoop for the model computation.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;(1) How can we&amp;nbsp;compute the memory needed more precisely?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;(2) We also ran Oryx on Hadoop, and it runs very slowly. The good thing about Hadoop is that it avoids the out-of-memory problem, but we do want to address the slow computation on Hadoop. So my question is: do you have any suggestions for tuning the Hadoop settings in the Oryx config (say, mapper-memory-mb, reducer-memory-mb)?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;(3) We heard that Oryx 2.0 uses Spark and has a built-in train/validation process. Will that help address the issues I mentioned in (2)?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for your time.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Jason&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 09 Jun 2015 02:24:28 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/28012#M6108</guid>
      <dc:creator>Jason.Chen</dc:creator>
      <dc:date>2015-06-09T02:24:28Z</dc:date>
    </item>
    <item>
      <title>Re: Oryx log info of ALS</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/28020#M6109</link>
      <description>&lt;P&gt;You are computing locally rather than on Hadoop, right? I don't think there's an easy way to compute memory usage, as it will vary somewhat with your parallelism as well as your data size. I believe it will require one matrix loaded into memory locally, and that will drive most of the memory usage, and you have an estimate of that. That may help, but I'd also just measure the heap size empirically to know for sure. You can watch the JVM's GC activity in real time with a tool like JProfiler, if you really want to see what's happening.&lt;/P&gt;&lt;P&gt;There's no point in using Hadoop if you're just going to run on one machine. It will be an order of magnitude slower, as there are a bunch of pointless writes to disk plus all the overhead of a full distributed file system and resource scheduler. Hadoop makes sense only if you already have a large cluster, or you need fault tolerance.&lt;/P&gt;&lt;P&gt;It sounds like you should simply get a decent estimate of your heap size requirements, which don't sound that large. It sounds like it's well under 9GB? You can easily get a machine in the cloud with tens of GB of RAM. Just do that.&lt;/P&gt;&lt;P&gt;Oryx 2 is a completely different architecture. There is no local mode; it's all Hadoop (and Spark). It has a lot of pluses and minuses as a result. I think it would be even worse if you're trying to run on one small machine; it's really for a small cluster at least.&lt;/P&gt;</description>
      <pubDate>Thu, 28 May 2015 08:48:00 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/28020#M6109</guid>
      <dc:creator>srowen</dc:creator>
      <dc:date>2015-05-28T08:48:00Z</dc:date>
    </item>
    <item>
      <title>Re: Oryx log info of ALS</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/28071#M6110</link>
      <description>&lt;P&gt;Sean,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We are experimenting with (a) a single Computation node and (b) a single Computation node plus a Hadoop cluster.&lt;/P&gt;&lt;P&gt;We want to see the difference in running time between (a) and (b).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Questions:&lt;/P&gt;&lt;P&gt;(1) What do you mean by "There's no point in using Hadoop if you're just going to run on one machine"? Our data will grow fast, and then we cannot just use one VM (and keep increasing its memory). We think Hadoop MapReduce can help us scale as the data grows.&lt;/P&gt;&lt;P&gt;(2) Is tuning "&lt;SPAN&gt;mapper-memory-mb&lt;/SPAN&gt;" and "&lt;SPAN&gt;reducer-memory-mb&lt;/SPAN&gt;" potentially a way to speed up the process, since it allocates more memory?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Jason&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 09 Jun 2015 02:26:33 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/28071#M6110</guid>
      <dc:creator>Jason.Chen</dc:creator>
      <dc:date>2015-06-09T02:26:33Z</dc:date>
    </item>
    <item>
      <title>Re: Oryx log info of ALS</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/28072#M6111</link>
      <description>&lt;P&gt;Yes, that's a good reason, if you have to scale past one machine. Previously I thought you meant you were running an entire Hadoop cluster on one machine, which is fine for a test but much slower and more complex than a simple non-Hadoop one-machine setup.&lt;/P&gt;&lt;P&gt;The mappers and reducers will need more memory if you see them running out of memory. If memory is very low but not exhausted, a Java process slows down with too much GC. Otherwise, more memory does not help. More nodes do not necessarily help either. You still face the overhead of task scheduling and data transfer, and the time taken to do non-distributed work. In fact, if you set up your workers so they do not live on the same nodes as the data nodes, it will be a lot slower.&lt;/P&gt;&lt;P&gt;For your scale, which fits easily in one machine, 7 nodes is big overkill, and 60 is way too big to provide any advantage. You're measuring pure Hadoop overhead, which you can tune, but it does not reflect work done. The upshot is that you should be able to handle data sizes hundreds or thousands of times larger this way, in roughly the same amount of time. For small data sets, you can see why there is no value in trying to use a large cluster; the data is just too tiny to split up.&lt;/P&gt;</description>
      <pubDate>Fri, 29 May 2015 07:50:44 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oryx-log-info-of-ALS/m-p/28072#M6111</guid>
      <dc:creator>srowen</dc:creator>
      <dc:date>2015-05-29T07:50:44Z</dc:date>
    </item>
  </channel>
</rss>

