Created on 08-25-2021 08:51 PM - edited on 09-08-2021 08:31 PM by subratadas
In this tutorial, we will learn how to create Apache Ozone volumes, buckets, and keys, and then see how to access Apache Ozone data from Apache Spark.
# ozone sh volume create /vol1
21/08/25 06:23:27 INFO rpc.RpcClient: Creating Volume: vol1, with root as owner.
# ozone sh bucket create /vol1/bucket1
21/08/25 06:24:09 INFO rpc.RpcClient: Creating Bucket: vol1/bucket1, with Versioning false and Storage Type set to DISK and Encryption set to false
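(Optional) To verify that the volume and bucket were created, you can list them; the exact output depends on your Ozone version:
# ozone sh volume list
# ozone sh bucket list /vol1
Next, create a small sample CSV file to upload: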
# vi /tmp/employee.csv
id,name,age
1,Ranga,33
2,Nishanth,4
3,Raja,60
# ozone sh key put /vol1/bucket1/employee.csv /tmp/employee.csv
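(Optional) You can confirm the upload by listing the keys in the bucket; the output format varies by Ozone version:
# ozone sh key list /vol1/bucket1
To access the bucket through the Hadoop-compatible o3fs scheme, the OzoneFileSystem implementation must be registered with the client, typically via the following property in core-site.xml (on CDP, Cloudera Manager can manage this for you):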
<property>
  <name>fs.o3fs.impl</name>
  <value>org.apache.hadoop.fs.ozone.OzoneFileSystem</value>
</property>
hdfs dfs -ls o3fs://bucket1.vol1.om.host.example.com/
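You can also read the uploaded file back through the same o3fs path to confirm its contents (a quick sanity check, assuming the client configuration above is in place):
hdfs dfs -cat o3fs://bucket1.vol1.om.host.example.com/employee.csv
Now launch spark-shell and read the CSV as a DataFrame: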
spark-shell
scala> val omHost="om.host.example.com"
scala> val df=spark.read.option("header", "true").option("inferSchema", "true").csv(s"o3fs://bucket1.vol1.${omHost}/employee.csv")
scala> df.show()
+---+--------+---+
| id| name|age|
+---+--------+---+
| 1| Ranga| 33|
| 2|Nishanth| 4|
| 3| Raja| 60|
+---+--------+---+
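Because the CSV is now an ordinary Spark DataFrame, you can also register it as a temporary view and query it with Spark SQL. This is a minimal sketch; the view name employee is chosen here only for illustration:
scala> df.createOrReplaceTempView("employee")
scala> spark.sql("SELECT name, age FROM employee WHERE age > 30").show()
This should return the rows for Ranga and Raja.
On a Kerberized cluster, launch spark-shell with a keytab and principal and grant it access to the Ozone filesystem: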
spark-shell \
--keytab ${KEY_TAB} \
--principal ${PRINCIPAL} \
--conf spark.yarn.access.hadoopFileSystems=o3fs://bucket1.vol1.om.host.example.com:9862
Note: In a Kerberized environment, you must specify the spark.yarn.access.hadoopFileSystems configuration; otherwise, the read fails with the following error:
java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
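spark.yarn.access.hadoopFileSystems accepts a comma-separated list, so you can request delegation tokens for Ozone and HDFS in the same session; hdfs://nameservice1 below is only a placeholder for your cluster's HDFS URI:
spark-shell \
--keytab ${KEY_TAB} \
--principal ${PRINCIPAL} \
--conf spark.yarn.access.hadoopFileSystems=o3fs://bucket1.vol1.om.host.example.com:9862,hdfs://nameservice1
Inside the shell, the same reads work as before: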
scala> val omHost="om.host.example.com"
scala> val df=spark.read.option("header", "true").option("inferSchema", "true").csv(s"o3fs://bucket1.vol1.${omHost}/employee.csv")
scala> df.show()
+---+--------+---+
| id| name|age|
+---+--------+---+
| 1| Ranga| 33|
| 2|Nishanth| 4|
| 3| Raja| 60|
+---+--------+---+
scala> val age30DF = df.filter(df("age") > 30)
scala> val outputPath = s"o3fs://bucket1.vol1.${omHost}/employee_age30.csv"
scala> age30DF.write.option("header", "true").mode("overwrite").csv(outputPath)
scala> val df2=spark.read.option("header", "true").option("inferSchema", "true").csv(outputPath)
scala> df2.show()
+---+-----+---+
| id| name|age|
+---+-----+---+
| 1|Ranga| 33|
| 3| Raja| 60|
+---+-----+---+
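(Optional) You can confirm from the Ozone shell that Spark created the output keys. Spark writes a directory of part files under employee_age30.csv, so the exact key names will vary (on a secured cluster this also assumes a valid Kerberos ticket):
# ozone sh key list /vol1/bucket1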
Note: If you get java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.ozone.OzoneFileSystem not found, add /opt/cloudera/parcels/CDH/jars/hadoop-ozone-filesystem-hadoop3-*.jar to the Spark classpath using the --jars option.
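For example, a launch with the Ozone filesystem jar on the classpath might look like this (the exact jar version in the parcel directory varies by CDP release, so the glob should resolve to a single jar on your cluster):
spark-shell \
--jars /opt/cloudera/parcels/CDH/jars/hadoop-ozone-filesystem-hadoop3-*.jar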
Thanks for reading! If you liked this article, please give it kudos.
Created on 07-27-2022 01:32 AM - edited 07-27-2022 01:35 AM
Hi,
I am looking for a solution and help regarding Apache Ozone. I am facing an issue while installing Apache Ozone on my CDP PvC Base 7.1.6 cluster.
Steps that I followed:
1) Add the service
2) Click on the Ozone service
3) Set ozone.service.id
4) Choose all dependencies, including Ranger and HDFS
I have checked the logs of the Ozone Manager and the DataNodes, and on both the DataNode and Ozone Manager machines the exception is the same:
log4j:ERROR Could not instantiate class [org.cloudera.log4j.redactor.RedactorAppender]. java.lang.ClassNotFoundException: org.cloudera.log4j.redactor.RedactorAppender
Please let me know if there's any solution available.
Thank you,
Parimal
Created on 01-23-2024 07:10 AM
Hi @parimalpatil
The RedactorAppender error can usually be ignored; it has nothing to do with the real failure unless the stack traces at the bottom point to something related to the Ozone roles. This Log4j appender redacts log messages using redaction rules before delegating to other appenders. Please share the complete failure log so that we can check and update you.
The workaround is to add the logredactor jar to the classpath of the roles where you see the RedactorAppender error.
You can add this through the CM UI -> Configuration -> search for "role_env_safety_valve" for the role that is reporting the error, and add:
OZONE_CLASSPATH=$OZONE_CLASSPATH:/opt/cloudera/parcels/CDH/jars/logredactor-2.0.8.jar