Hue S3 access to HGST object storage

New Contributor

I'm getting the following error in the Hue S3 file browser:

Failed to initialize bucket cache: local variable 'region_name' referenced before assignment

 

With my HDFS S3 configuration, I'm able to access S3A with hadoop fs -ls s3a://hadoop/, where hadoop is the bucket name I've created on my HGST ActiveScale system.

 

I have tried adding region=us-east-1 and region= to the hue_safety_valve.ini, but I get the same error.

 

I'm running the Cloudera QuickStart VM 5.12.0 in VirtualBox. I added the S3 Connector, and the configuration below is what I use for Hue and HDFS.

 

Log:

[08/Jan/2018 12:24:18 -0800] middleware INFO Processing exception: S3 filesystem exception.: Traceback (most recent call last):
File "/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/core/handlers/base.py", line 112, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/db/transaction.py", line 371, in inner
return func(*args, **kwargs)
File "/usr/lib/hue/apps/filebrowser/src/filebrowser/views.py", line 206, in view
raise PopupException(msg, detail=e)
PopupException: S3 filesystem exception.

[08/Jan/2018 12:24:18 -0800] exceptions_renderable ERROR Potential trace: [('/usr/lib/hue/apps/filebrowser/src/filebrowser/views.py', 195, 'view', 'return listdir_paged(request, path)'), ('/usr/lib/hue/apps/filebrowser/src/filebrowser/views.py', 429, 'listdir_paged', 'all_stats = request.fs.do_as_user(do_as, request.fs.listdir_stats, path)'), ('/usr/lib/hue/desktop/core/src/desktop/lib/fs/proxyfs.py', 102, 'do_as_user', 'return fn(*args, **kwargs)'), ('/usr/lib/hue/desktop/core/src/desktop/lib/fs/proxyfs.py', 121, 'listdir_stats', 'return self._get_fs(path).listdir_stats(path, **kwargs)'), ('/usr/lib/hue/desktop/libs/aws/src/aws/s3/__init__.py', 52, 'wrapped', 'return fn(*args, **kwargs)'), ('/usr/lib/hue/desktop/libs/aws/src/aws/s3/s3fs.py', 274, 'listdir_stats', 'self._init_bucket_cache()'), ('/usr/lib/hue/desktop/libs/aws/src/aws/s3/s3fs.py', 89, '_init_bucket_cache', "raise S3FileSystemException(_('Failed to initialize bucket cache: %s') % e)")]

[08/Jan/2018 12:24:18 -0800] exceptions_renderable ERROR Potential detail: Failed to initialize bucket cache: local variable 'region_name' referenced before assignment

[08/Jan/2018 12:24:18 -0800] access INFO 10.0.2.15 cloudera - "GET /filebrowser/view=S3A:// HTTP/1.1"

 

hue_safety_valve.ini:

[aws]
[[aws_accounts]]
[[[default]]]
access_key_id=s3account
secret_access_key=s3accountpass
allow_environment_credentials=false
host=10.1.1.5
proxy_address=10.1.1.5
proxy_port=80
is_secure=false
calling_format=boto.s3.connection.OrdinaryCallingFormat

 

HDFS core-site.xml safety valve:

<property><name>fs.s3a.access.key</name><value>s3account</value><final>true</final><description>your access key - username</description></property>
<property><name>fs.s3a.secret.key</name><value>s3accountpass</value><final>true</final><description>your secret key - password</description></property>
<property><name>fs.s3a.endpoint</name><value>10.1.1.5</value><final>true</final><description>address of your ActiveScale endpoint</description></property>
<property><name>fs.s3a.proxy.host</name><value>10.1.1.5</value><final>true</final><description>address of your ActiveScale proxy</description></property>
<property><name>fs.s3a.proxy.port</name><value>80</value><final>true</final><description>TCP port of the ActiveScale proxy</description></property>
<property><name>fs.s3a.connection.ssl.enabled</name><value>false</value><final>true</final><description>disable SSL</description></property>
<property><name>fs.s3a.signing-algorithm</name><value>S3SignerType</value><final>true</final><description>Use legacy v2 signatures.</description></property>
<property><name>fs.s3a.fast.upload</name><value>true</value><final>true</final><description>Use the fast upload mechanism, which is capable of uploading from memory.</description></property>
<property><name>fs.s3a.fast.buffer.size</name><value>67108864</value><final>true</final><description>Size (in bytes) of the initial memory buffer allocated for an upload. No effect if fs.s3a.fast.upload is false.</description></property>
<property><name>fs.s3a.threads.max</name><value>2</value><final>true</final><description>Maximum number of concurrent active (part) uploads, each using a thread from the threadpool.</description></property>
<property><name>fs.s3a.max.total.tasks</name><value>2</value><final>true</final><description>Number of (part) uploads allowed in the queue before blocking additional uploads.</description></property>
<property><name>fs.s3a.multipart.size</name><value>67108864</value><final>true</final><description>How big (in bytes) to split upload or copy operations into.</description></property>
<property><name>fs.s3a.multipart.threshold</name><value>67108864</value><final>true</final><description>Threshold before uploads or copies use parallel multipart operations.</description></property>
<property><name>fs.s3a.fast.upload.buffer</name><value>bytebuffer</value><final>true</final><description>Cache blocks in memory instead of the local /tmp directory.</description></property>
<property><name>fs.s3a.fast.upload.active.blocks</name><value>2</value><final>true</final><description>Number of concurrent block uploads.</description></property>

 


Master Guru
Hue relies on the Boto S3 library to work. Are you able to get a usable connection with Boto directly against your HGST S3-like service?
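
For a quick check outside of Hue, something along these lines should tell you whether Boto itself can reach the ActiveScale endpoint. This is only a sketch, assuming boto 2 and reusing the host, port, and credentials from your hue_safety_valve.ini; adjust as needed:

# Minimal boto 2 connectivity check against an S3-compatible endpoint.
# Values are taken from the hue_safety_valve.ini posted above.
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

conn = S3Connection(
    aws_access_key_id='s3account',
    aws_secret_access_key='s3accountpass',
    host='10.1.1.5',
    port=80,
    is_secure=False,
    calling_format=OrdinaryCallingFormat(),  # path-style addressing, as in your config
)

# If this prints your 'hadoop' bucket, boto can talk to the endpoint and the
# problem is limited to the region handling described below.
print([b.name for b in conn.get_all_buckets()])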

Per the code, the region configuration is used only if you specify no hostname, in which case the hostname for the region is determined dynamically (for AWS S3). Boto appears to have an issue when a hostname is provided but it cannot determine the region for it (because the hostname in your setup is not a real AWS S3 service hostname): https://github.com/boto/boto/issues/3377
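
As a rough illustration of that failure mode (a simplified sketch, not Boto's or Hue's actual code): the region name is only assigned when the hostname matches a known AWS endpoint pattern, so an arbitrary host such as 10.1.1.5 falls through every branch and the later reference blows up, which Hue then surfaces as the bucket cache error you see:

# Illustrative sketch only; the region patterns below are hypothetical.
def infer_region(host):
    for region in ('us-east-1', 'us-west-2', 'eu-west-1'):
        if host.endswith('s3.%s.amazonaws.com' % region):
            region_name = region
    # For a non-AWS host like '10.1.1.5' no branch assigns region_name, so this
    # raises: local variable 'region_name' referenced before assignment
    return region_name

infer_region('10.1.1.5')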

We do not test Hue's S3 browser with S3-like services other than AWS S3. Please feel free to log a bug report on https://issues.cloudera.org/browse/HUE detailing this need.