Fix filesystem #2291
base: main
Conversation
…pecified in file path
pyiceberg/io/pyarrow.py (Outdated)

```python
return uri.scheme, uri.netloc, uri.path

# Load defaults from environment
default_scheme = os.getenv("DEFAULT_SCHEME", "file")
```
Can we use the central config (pyiceberg/utils/config.py) instead of reading env variables directly? That would enable configuration via file OR env variables, which is how most other configs are documented and exposed to catalog construction.
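The central-config suggestion could look roughly like this. Note this is a stand-in, dict-backed `Config` for illustration, not the real `pyiceberg/utils/config.py` API, and the env-var naming convention shown is an assumption:

```python
import os
from typing import Optional

class Config:
    """Stand-in central config: file-backed values win, env variables fill gaps."""

    def __init__(self, file_values: dict):
        self._values = file_values

    def get_str(self, key: str) -> Optional[str]:
        # Assumed convention: "default-scheme" maps to DEFAULT_SCHEME
        env_key = key.upper().replace("-", "_")
        return self._values.get(key) or os.environ.get(env_key)

config = Config({"default-scheme": "hdfs"})
config.get_str("default-scheme")  # file-backed value takes precedence over the env
```

Resolving both sources behind one `get_str` is what lets callers stay agnostic about whether a value came from a config file or the environment.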
Thanks Tom! Overall I'm excited for this PR because I've had to hack around this with an overridden PyArrowFileIO, e.g.:

```python
class HDFSFileIO(PyArrowFileIO):
    """Simple PyArrowFileIO that defaults paths without scheme to HDFS"""

    @override
    def new_input(self, location: str) -> PyArrowFile:
        """Fix paths without scheme to use HDFS"""
        if not urlparse(location).scheme and location.startswith('/'):
            hdfs_host = self.properties.get(HDFS_HOST, 'localhost')
            hdfs_port = self.properties.get(HDFS_PORT, '9000')
            location = f'hdfs://{hdfs_host}:{hdfs_port}{location}'
        return super().new_input(location)
```
Thanks for the PR! I added a comment about passing these through the properties and structuring the HDFS-specific code.
```python
default_scheme = config.get_str("default-scheme") or "file"
default_netloc = config.get_str("default-netloc") or ""
```
Nit: I think it's better to pass these in through the properties field
(https://py.iceberg.apache.org/configuration/#hdfs).
We can read the env variables and then pass them into the properties.
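The reviewer's suggestion could be sketched like this (the property key names and the helper are illustrative, not the final pyiceberg API): read the env variables once at catalog construction and fold them into the properties dict, so explicit properties always win.

```python
import os

def properties_with_defaults(user_props: dict) -> dict:
    """Fold env-derived defaults into catalog properties without overriding explicit values."""
    props = dict(user_props)
    props.setdefault("default-scheme", os.getenv("DEFAULT_SCHEME", "file"))
    props.setdefault("default-netloc", os.getenv("DEFAULT_NETLOC", ""))
    return props

# An explicitly supplied property is kept; only missing keys fall back to the env.
properties_with_defaults({"default-scheme": "hdfs"})
```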
```python
# Apply logic
scheme = uri.scheme or default_scheme
netloc = uri.netloc or default_netloc

if scheme in ("hdfs", "viewfs"):
    return scheme, netloc, uri.path
else:
    return uri.scheme, uri.netloc, f"{uri.netloc}{uri.path}"
```

```python
# For non-HDFS URIs, include netloc in the path if present
path = uri.path if uri.scheme else os.path.abspath(location)
if netloc and not path.startswith(netloc):
    path = f"{netloc}{path}"
return scheme, netloc, path
```
I actually really want to get rid of this `if {scheme}` logic here. Is there a way to refactor these changes down into `_initialize_hdfs_fs`, so we can keep the HDFS logic in the same place?
I don't see a nice way to do this, since the path used in the PyArrowFile is actually different in the different cases. I tried to see if we could use the same path with the netloc in it for HDFS, but it doesn't seem to work:
#2291 (comment)
That shows that setting the netloc on filesystem creation and having it in the path (as is done for the other fs types) doesn't work for HDFS.
What is the proper way to address an absolute path in HadoopFileSystem? Your example shows that […] I'm trying to see what the requirements are here. I only found examples with […]. Also I'm curious if […]
Rationale for this change
For HDFS it's common to get the scheme and netloc from config and have paths carry just the path portion of the URI. Add environment variables to support this case.
Example:

```shell
export DEFAULT_SCHEME=hdfs
export DEFAULT_NETLOC=ltx1-yugioh-cluster01.linkfs.prod-ltx1.atd.prod.linkedin.com:9000
```
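The intended behavior can be sketched as follows. This is a simplified stand-alone version of the parsing logic (the real method lives in pyiceberg/io/pyarrow.py), with illustrative default values:

```python
from urllib.parse import urlparse

def parse_location(location: str, default_scheme: str = "file", default_netloc: str = ""):
    """Split a location into (scheme, netloc, path), falling back to configured defaults."""
    uri = urlparse(location)
    scheme = uri.scheme or default_scheme
    netloc = uri.netloc or default_netloc
    return scheme, netloc, uri.path

# A bare path picks up the configured HDFS defaults...
parse_location("/data/warehouse/t.parquet", "hdfs", "namenode:9000")
# ...while a fully-qualified URI is left untouched.
parse_location("s3://bucket/key.parquet")
```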
Are these changes tested?
Tested in a test environment at LinkedIn and with unit tests.
Are there any user-facing changes?
No user-facing changes by default. If these environment variables are set and a file path has no scheme/netloc, the specified defaults are used.