# Apache Hadoop 3.4.1 Release Notes

These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements.

---

* [HADOOP-18830](https://issues.apache.org/jira/browse/HADOOP-18830) | *Major* | **S3A: Cut S3 Select**

S3 Select is no longer supported through the S3A connector.

---

* [HADOOP-18993](https://issues.apache.org/jira/browse/HADOOP-18993) | *Minor* | **S3A: Add option fs.s3a.classloader.isolation (#6301)**

Users who want to load custom implementations of AWS credential providers from user-provided jars can set `fs.s3a.classloader.isolation` to `false` (see the configuration sketch after these notes).

---

* [HADOOP-19084](https://issues.apache.org/jira/browse/HADOOP-19084) | *Blocker* | **prune dependency exports of hadoop-\* modules**

Maven/Ivy imports of hadoop-common are less likely to end up with log4j versions on their classpath.

---

* [HADOOP-19101](https://issues.apache.org/jira/browse/HADOOP-19101) | *Blocker* | **Vectored Read into off-heap buffer broken in fallback implementation**

PositionedReadable.readVectored() will read incorrect data from HDFS, Azure ABFS, and other stores when given a direct buffer allocator. For cross-version compatibility, use on-heap buffer allocators only (see the usage sketch after these notes).

---

* [HADOOP-19120](https://issues.apache.org/jira/browse/HADOOP-19120) | *Major* | **[ABFS]: ApacheHttpClient adaptation as network library**

Apache HttpClient 4.5.x is a new implementation of HTTP connections; it supports a large configurable pool of connections along with the ability to limit their lifespan.

The networking library can be chosen using the configuration option `fs.azure.networking.library`. The supported values are:

- `JDK_HTTP_URL_CONNECTION`: use the JDK networking library (default)
- `APACHE_HTTP_CLIENT`: use Apache HttpClient

Important: when the networking library is switched to the Apache HTTP client, the Apache httpcore and httpclient jars must be on the classpath (see the configuration sketch after these notes).

---

* [HADOOP-19221](https://issues.apache.org/jira/browse/HADOOP-19221) | *Major* | **S3A: Unable to recover from failure of multipart block upload attempt "Status Code: 400; Error Code: RequestTimeout"**

S3A upload operations can now recover from failures where the store returns a 500 error. The option `fs.s3a.retry.http.5xx.errors` controls whether the S3A client itself retries on a 50x error other than 503 throttling events (which are independently processed, as before). Default: `true`. A configuration sketch follows these notes.
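---

Referenced from HADOOP-18993 above: a minimal sketch, not part of the release note itself, of turning off S3A classloader isolation so a credential provider shipped in a user-provided jar can be loaded. The bucket name and provider class are hypothetical placeholders.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3AClassloaderIsolationExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Resolve extension classes (e.g. credential providers) through the
    // user classloader instead of the isolated one.
    conf.setBoolean("fs.s3a.classloader.isolation", false);
    // Hypothetical custom provider from a user-supplied jar.
    conf.set("fs.s3a.aws.credentials.provider",
        "com.example.auth.MyCredentialProvider");
    // Illustrative bucket name.
    FileSystem fs = FileSystem.get(
        new Path("s3a://example-bucket/").toUri(), conf);
    System.out.println("Filesystem: " + fs.getUri());
  }
}
```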
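---

Referenced from HADOOP-19101 above: a minimal sketch of a vectored read that passes the on-heap allocator `ByteBuffer::allocate`; `ByteBuffer::allocateDirect` is the direct allocator to avoid with the fallback implementation. The path and byte ranges are illustrative.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileRange;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OnHeapVectoredRead {
  public static void main(String[] args) throws Exception {
    Path path = new Path("hdfs:///data/example.bin"); // illustrative path
    FileSystem fs = path.getFileSystem(new Configuration());
    List<FileRange> ranges = Arrays.asList(
        FileRange.createFileRange(0, 4096),
        FileRange.createFileRange(65536, 4096));
    try (FSDataInputStream in = fs.open(path)) {
      // On-heap allocator; do not pass ByteBuffer::allocateDirect here.
      in.readVectored(ranges, ByteBuffer::allocate);
      for (FileRange range : ranges) {
        ByteBuffer data = range.getData().get(); // waits for completion
        System.out.println("Read " + data.remaining()
            + " bytes at offset " + range.getOffset());
      }
    }
  }
}
```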
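---

Referenced from HADOOP-19120 above: a minimal sketch of switching ABFS to the Apache HttpClient networking library. The account and container names are placeholders, and the Apache httpcore and httpclient jars are assumed to be on the classpath.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AbfsNetworkingLibraryExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Default is JDK_HTTP_URL_CONNECTION; switch to the Apache client.
    conf.set("fs.azure.networking.library", "APACHE_HTTP_CLIENT");
    // Placeholder account/container.
    FileSystem fs = FileSystem.get(
        new Path("abfs://container@account.dfs.core.windows.net/").toUri(),
        conf);
    System.out.println("ABFS filesystem ready: " + fs.getUri());
  }
}
```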
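---

Referenced from HADOOP-19221 above: a minimal sketch of setting the retry option explicitly. Since the default is already `true`, this only matters for deployments that want to disable client-side retries on 50x errors and restore the earlier fail-fast behavior.

```java
import org.apache.hadoop.conf.Configuration;

public class S3ARetry5xxExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // true (the default): retry uploads on 500 errors such as RequestTimeout.
    // Set to false to fail fast on 50x errors other than 503 throttling,
    // which is handled separately as before.
    conf.setBoolean("fs.s3a.retry.http.5xx.errors", false);
    System.out.println("fs.s3a.retry.http.5xx.errors = "
        + conf.getBoolean("fs.s3a.retry.http.5xx.errors", true));
  }
}
```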