VYPR
High severity (8.2) · NVD Advisory · Published Apr 22, 2026 · Updated Apr 27, 2026

CVE-2026-41145
Description

MinIO is a high-performance object storage system. Starting in RELEASE.2023-05-18T00-05-36Z and prior to RELEASE.2026-04-11T03-20-12Z, an authentication bypass in MinIO's STREAMING-UNSIGNED-PAYLOAD-TRAILER code path allows any user who knows a valid access key to write arbitrary objects to any bucket without knowing the secret key or providing a valid cryptographic signature. Every deployment in the affected range is impacted: the attack requires only a valid access key (the well-known default minioadmin, or any key with WRITE permission on a bucket) and a target bucket name.

The root cause is a mismatch between two checks. PutObjectHandler and PutObjectPartHandler call newUnsignedV4ChunkedReader with a signature-verification gate based solely on the presence of the Authorization header, while isPutActionAllowed extracts credentials from either the Authorization header or the X-Amz-Credential query parameter and trusts whichever it finds. An attacker therefore omits the Authorization header and supplies credentials exclusively via the query string: the signature gate evaluates to false, doesSignatureMatch is never called, and the request proceeds with the permissions of the impersonated access key. This affects PutObjectHandler (standard and tables/warehouse bucket paths) and PutObjectPartHandler (multipart uploads).

Users of the open-source minio/minio project should upgrade to MinIO AIStor RELEASE.2026-04-11T03-20-12Z or later. If upgrading is not immediately possible, block unsigned-trailer requests at the load balancer: reject any request carrying X-Amz-Content-Sha256: STREAMING-UNSIGNED-PAYLOAD-TRAILER at the reverse proxy or WAF layer. Clients can use STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAILER (the signed variant) instead. Alternatively, restrict WRITE permissions by limiting s3:PutObject grants to trusted principals; this reduces the attack surface but does not eliminate the vulnerability, since any user with WRITE permission can still exploit it with only their access key.

Affected packages

Versions sourced from the GitHub Security Advisory.

Package: github.com/minio/minio (Go)
Affected versions: >= 0.0.0-20230506025312-76913a9fd5c6, <= 0.0.0-20260212201848-7aac2a2c5b7c
Patched versions: (none listed)

Affected products

  • cpe:2.3:a:minio:minio:*:*:*:*:*:*:*:*
    Range: >= 2023-05-18T00-05-36Z, < 2026-04-11T03-20-12Z

Patches

76913a9fd5c6 · Signed trailers for signature v4 (#16484)
Klaus Post · May 6, 2023 · via GHSA · https://github.com/minio/minio
17 files changed · +918 −281
  • cmd/apierrorcode_string.go +189 −188 modified
    Regenerated Go stringer output: the patch inserts a new ErrExcessData code at index 123, shifts every subsequent API error code (ErrInvalidEncryptionMethod through apiErrCodeEnd) up by one, and updates the _APIErrorCode_name string and _APIErrorCode_index offsets to match.
     
     func (i APIErrorCode) String() string {
     	if i < 0 || i >= APIErrorCode(len(_APIErrorCode_index)-1) {
    
  • cmd/api-errors.go (+9 −0) modified
    @@ -28,6 +28,7 @@ import (
     	"strings"
     
     	"github.com/Azure/azure-storage-blob-go/azblob"
    +	"github.com/minio/minio/internal/ioutil"
     	"google.golang.org/api/googleapi"
     
     	"github.com/minio/madmin-go/v2"
    @@ -199,6 +200,7 @@ const (
     	ErrInvalidTagDirective
     	ErrPolicyAlreadyAttached
     	ErrPolicyNotAttached
    +	ErrExcessData
     	// Add new error codes here.
     
     	// SSE-S3/SSE-KMS related API errors
    @@ -527,6 +529,11 @@ var errorCodes = errorCodeMap{
     		Description:    "Your proposed upload exceeds the maximum allowed object size.",
     		HTTPStatusCode: http.StatusBadRequest,
     	},
    +	ErrExcessData: {
    +		Code:           "ExcessData",
    +		Description:    "More data provided than indicated content length",
    +		HTTPStatusCode: http.StatusBadRequest,
    +	},
     	ErrPolicyTooLarge: {
     		Code:           "PolicyTooLarge",
     		Description:    "Policy exceeds the maximum allowed document size.",
    @@ -2099,6 +2106,8 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
     		apiErr = ErrMalformedXML
     	case errInvalidMaxParts:
     		apiErr = ErrInvalidMaxParts
    +	case ioutil.ErrOverread:
    +		apiErr = ErrExcessData
     	}
     
     	// Compression errors
    
  • cmd/auth-handler.go (+39 −12) modified
    @@ -88,6 +88,18 @@ func isRequestSignStreamingV4(r *http.Request) bool {
     		r.Method == http.MethodPut
     }
     
    +// Verify if the request has AWS Streaming Signature Version '4' with signed content and trailer. This is only valid for 'PUT' operation.
    +func isRequestSignStreamingTrailerV4(r *http.Request) bool {
    +	return r.Header.Get(xhttp.AmzContentSha256) == streamingContentSHA256Trailer &&
    +		r.Method == http.MethodPut
    +}
    +
    +// Verify if the request has AWS Streaming Signature Version '4', with unsigned content and trailer.
    +func isRequestUnsignedTrailerV4(r *http.Request) bool {
    +	return r.Header.Get(xhttp.AmzContentSha256) == unsignedPayloadTrailer &&
    +		r.Method == http.MethodPut && strings.Contains(r.Header.Get(xhttp.ContentEncoding), streamingContentEncoding)
    +}
    +
     // Authorization type.
     //
     //go:generate stringer -type=authType -trimprefix=authType $GOFILE
    @@ -105,10 +117,12 @@ const (
     	authTypeSignedV2
     	authTypeJWT
     	authTypeSTS
    +	authTypeStreamingSignedTrailer
    +	authTypeStreamingUnsignedTrailer
     )
     
     // Get request authentication type.
    -func getRequestAuthType(r *http.Request) authType {
    +func getRequestAuthType(r *http.Request) (at authType) {
     	if r.URL != nil {
     		var err error
     		r.Form, err = url.ParseQuery(r.URL.RawQuery)
    @@ -123,6 +137,10 @@ func getRequestAuthType(r *http.Request) authType {
     		return authTypePresignedV2
     	} else if isRequestSignStreamingV4(r) {
     		return authTypeStreamingSigned
    +	} else if isRequestSignStreamingTrailerV4(r) {
    +		return authTypeStreamingSignedTrailer
    +	} else if isRequestUnsignedTrailerV4(r) {
    +		return authTypeStreamingUnsignedTrailer
     	} else if isRequestSignatureV4(r) {
     		return authTypeSigned
     	} else if isRequestPresignedSignatureV4(r) {
    @@ -560,13 +578,15 @@ func isReqAuthenticated(ctx context.Context, r *http.Request, region string, sty
     
     // List of all support S3 auth types.
     var supportedS3AuthTypes = map[authType]struct{}{
    -	authTypeAnonymous:       {},
    -	authTypePresigned:       {},
    -	authTypePresignedV2:     {},
    -	authTypeSigned:          {},
    -	authTypeSignedV2:        {},
    -	authTypePostPolicy:      {},
    -	authTypeStreamingSigned: {},
    +	authTypeAnonymous:                {},
    +	authTypePresigned:                {},
    +	authTypePresignedV2:              {},
    +	authTypeSigned:                   {},
    +	authTypeSignedV2:                 {},
    +	authTypePostPolicy:               {},
    +	authTypeStreamingSigned:          {},
    +	authTypeStreamingSignedTrailer:   {},
    +	authTypeStreamingUnsignedTrailer: {},
     }
     
     // Validate if the authType is valid and supported.
    @@ -582,7 +602,8 @@ func setAuthHandler(h http.Handler) http.Handler {
     		tc, ok := r.Context().Value(mcontext.ContextTraceKey).(*mcontext.TraceCtxt)
     
     		aType := getRequestAuthType(r)
    -		if aType == authTypeSigned || aType == authTypeSignedV2 || aType == authTypeStreamingSigned {
    +		switch aType {
    +		case authTypeSigned, authTypeSignedV2, authTypeStreamingSigned, authTypeStreamingSignedTrailer:
     			// Verify if date headers are set, if not reject the request
     			amzDate, errCode := parseAmzDateHeader(r)
     			if errCode != ErrNone {
    @@ -613,10 +634,16 @@ func setAuthHandler(h http.Handler) http.Handler {
     				atomic.AddUint64(&globalHTTPStats.rejectedRequestsTime, 1)
     				return
     			}
    -		}
    -		if isSupportedS3AuthType(aType) || aType == authTypeJWT || aType == authTypeSTS {
     			h.ServeHTTP(w, r)
     			return
    +		case authTypeJWT, authTypeSTS:
    +			h.ServeHTTP(w, r)
    +			return
    +		default:
    +			if isSupportedS3AuthType(aType) {
    +				h.ServeHTTP(w, r)
    +				return
    +			}
     		}
     
     		if ok {
    @@ -710,7 +737,7 @@ func isPutActionAllowed(ctx context.Context, atype authType, bucketName, objectN
     		return ErrSignatureVersionNotSupported
     	case authTypeSignedV2, authTypePresignedV2:
     		cred, owner, s3Err = getReqAccessKeyV2(r)
    -	case authTypeStreamingSigned, authTypePresigned, authTypeSigned:
    +	case authTypeStreamingSigned, authTypePresigned, authTypeSigned, authTypeStreamingSignedTrailer, authTypeStreamingUnsignedTrailer:
     		cred, owner, s3Err = getReqAccessKeyV4(r, region, serviceS3)
     	}
     	if s3Err != ErrNone {
    
  • cmd/authtype_string.go (+4 −2) modified
    @@ -18,11 +18,13 @@ func _() {
     	_ = x[authTypeSignedV2-7]
     	_ = x[authTypeJWT-8]
     	_ = x[authTypeSTS-9]
    +	_ = x[authTypeStreamingSignedTrailer-10]
    +	_ = x[authTypeStreamingUnsignedTrailer-11]
     }
     
    -const _authType_name = "UnknownAnonymousPresignedPresignedV2PostPolicyStreamingSignedSignedSignedV2JWTSTS"
    +const _authType_name = "UnknownAnonymousPresignedPresignedV2PostPolicyStreamingSignedSignedSignedV2JWTSTSStreamingSignedTrailerStreamingUnsignedTrailer"
     
    -var _authType_index = [...]uint8{0, 7, 16, 25, 36, 46, 61, 67, 75, 78, 81}
    +var _authType_index = [...]uint8{0, 7, 16, 25, 36, 46, 61, 67, 75, 78, 81, 103, 127}
     
     func (i authType) String() string {
     	if i < 0 || i >= authType(len(_authType_index)-1) {
    
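The `authtype_string.go` hunk is regenerated `go:generate stringer` output: one concatenated name string plus an index array of substring boundaries, which is why adding two enum values changes both lines. How the lookup works, using the exact values from the patched file:

```go
package main

import "fmt"

// Name string and boundary indices copied from the patched
// authtype_string.go; authTypeString mimics the generated String method.
const authTypeName = "UnknownAnonymousPresignedPresignedV2PostPolicyStreamingSignedSignedSignedV2JWTSTSStreamingSignedTrailerStreamingUnsignedTrailer"

var authTypeIndex = [...]uint8{0, 7, 16, 25, 36, 46, 61, 67, 75, 78, 81, 103, 127}

func authTypeString(i int) string {
	if i < 0 || i >= len(authTypeIndex)-1 {
		return fmt.Sprintf("authType(%d)", i)
	}
	// Slice the shared name string between adjacent boundaries.
	return authTypeName[authTypeIndex[i]:authTypeIndex[i+1]]
}

func main() {
	fmt.Println(authTypeString(10)) // StreamingSignedTrailer
	fmt.Println(authTypeString(11)) // StreamingUnsignedTrailer
}
```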
  • cmd/erasure-multipart.go (+2 −1) modified
    @@ -591,7 +591,7 @@ func (er erasureObjects) PutObjectPart(ctx context.Context, bucket, object, uplo
     			return pi, InvalidArgument{
     				Bucket: bucket,
     				Object: fi.Name,
    -				Err:    fmt.Errorf("checksum missing, want %s, got %s", cs, r.ContentCRCType().String()),
    +				Err:    fmt.Errorf("checksum missing, want %q, got %q", cs, r.ContentCRCType().String()),
     			}
     		}
     	}
    @@ -707,6 +707,7 @@ func (er erasureObjects) PutObjectPart(ctx context.Context, bucket, object, uplo
     		Index:      index,
     		Checksums:  r.ContentCRC(),
     	}
    +
     	fi.Parts = []ObjectPartInfo{partInfo}
     	partFI, err := fi.MarshalMsg(nil)
     	if err != nil {
    
  • cmd/object-api-multipart_test.go (+2 −1) modified
    @@ -29,6 +29,7 @@ import (
     	"github.com/dustin/go-humanize"
     	"github.com/minio/minio/internal/config/storageclass"
     	"github.com/minio/minio/internal/hash"
    +	"github.com/minio/minio/internal/ioutil"
     )
     
     // Wrapper for calling NewMultipartUpload tests for both Erasure multiple disks and single node setup.
    @@ -277,7 +278,7 @@ func testObjectAPIPutObjectPart(obj ObjectLayer, instanceType string, t TestErrH
     		// Input with size less than the size of actual data inside the reader.
     		{
     			bucketName: bucket, objName: object, uploadID: uploadID, PartID: 1, inputReaderData: "abcd", inputMd5: "900150983cd24fb0d6963f7d28e17f73", intputDataSize: int64(len("abcd") - 1),
    -			expectedError: hash.BadDigest{ExpectedMD5: "900150983cd24fb0d6963f7d28e17f73", CalculatedMD5: "900150983cd24fb0d6963f7d28e17f72"},
    +			expectedError: ioutil.ErrOverread,
     		},
     
     		// Test case - 16-19.
    
  • cmd/object-api-putobject_test.go (+9 −8) modified
    @@ -29,6 +29,7 @@ import (
     
     	"github.com/dustin/go-humanize"
     	"github.com/minio/minio/internal/hash"
    +	"github.com/minio/minio/internal/ioutil"
     )
     
     func md5Header(data []byte) map[string]string {
    @@ -123,7 +124,7 @@ func testObjectAPIPutObject(obj ObjectLayer, instanceType string, t TestErrHandl
     		9: {
     			bucketName: bucket, objName: object, inputData: []byte("abcd"),
     			inputMeta: map[string]string{"etag": "900150983cd24fb0d6963f7d28e17f73"}, intputDataSize: int64(len("abcd") - 1),
    -			expectedError: hash.BadDigest{ExpectedMD5: "900150983cd24fb0d6963f7d28e17f73", CalculatedMD5: "900150983cd24fb0d6963f7d28e17f72"},
    +			expectedError: ioutil.ErrOverread,
     		},
     
     		// Validating for success cases.
    @@ -162,9 +163,9 @@ func testObjectAPIPutObject(obj ObjectLayer, instanceType string, t TestErrHandl
     		},
     
     		// data with size different from the actual number of bytes available in the reader
    -		26: {bucketName: bucket, objName: object, inputData: data, intputDataSize: int64(len(data) - 1), expectedMd5: getMD5Hash(data[:len(data)-1])},
    +		26: {bucketName: bucket, objName: object, inputData: data, intputDataSize: int64(len(data) - 1), expectedMd5: getMD5Hash(data[:len(data)-1]), expectedError: ioutil.ErrOverread},
     		27: {bucketName: bucket, objName: object, inputData: nilBytes, intputDataSize: int64(len(nilBytes) + 1), expectedMd5: getMD5Hash(nilBytes), expectedError: IncompleteBody{Bucket: bucket, Object: object}},
    -		28: {bucketName: bucket, objName: object, inputData: fiveMBBytes, expectedMd5: getMD5Hash(fiveMBBytes)},
    +		28: {bucketName: bucket, objName: object, inputData: fiveMBBytes, expectedMd5: getMD5Hash(fiveMBBytes), expectedError: ioutil.ErrOverread},
     
     		// valid data with X-Amz-Meta- meta
     		29: {bucketName: bucket, objName: object, inputData: data, inputMeta: map[string]string{"X-Amz-Meta-AppID": "a42"}, intputDataSize: int64(len(data)), expectedMd5: getMD5Hash(data)},
    @@ -173,7 +174,7 @@ func testObjectAPIPutObject(obj ObjectLayer, instanceType string, t TestErrHandl
     		30: {bucketName: bucket, objName: "emptydir/", inputData: []byte{}, expectedMd5: getMD5Hash([]byte{})},
     		// Put an object inside the empty directory
     		31: {bucketName: bucket, objName: "emptydir/" + object, inputData: data, intputDataSize: int64(len(data)), expectedMd5: getMD5Hash(data)},
    -		// Put the empty object with a trailing slash again (refer to Test case 31), this needs to succeed
    +		// Put the empty object with a trailing slash again (refer to Test case 30), this needs to succeed
     		32: {bucketName: bucket, objName: "emptydir/", inputData: []byte{}, expectedMd5: getMD5Hash([]byte{})},
     
     		// With invalid crc32.
    @@ -187,23 +188,23 @@ func testObjectAPIPutObject(obj ObjectLayer, instanceType string, t TestErrHandl
     		in := mustGetPutObjReader(t, bytes.NewReader(testCase.inputData), testCase.intputDataSize, testCase.inputMeta["etag"], testCase.inputSHA256)
     		objInfo, actualErr := obj.PutObject(context.Background(), testCase.bucketName, testCase.objName, in, ObjectOptions{UserDefined: testCase.inputMeta})
     		if actualErr != nil && testCase.expectedError == nil {
    -			t.Errorf("Test %d: %s: Expected to pass, but failed with: error %s.", i+1, instanceType, actualErr.Error())
    +			t.Errorf("Test %d: %s: Expected to pass, but failed with: error %s.", i, instanceType, actualErr.Error())
     			continue
     		}
     		if actualErr == nil && testCase.expectedError != nil {
    -			t.Errorf("Test %d: %s: Expected to fail with error \"%s\", but passed instead.", i+1, instanceType, testCase.expectedError.Error())
    +			t.Errorf("Test %d: %s: Expected to fail with error \"%s\", but passed instead.", i, instanceType, testCase.expectedError.Error())
     			continue
     		}
     		// Failed as expected, but does it fail for the expected reason.
     		if actualErr != nil && actualErr != testCase.expectedError {
    -			t.Errorf("Test %d: %s: Expected to fail with error \"%v\", but instead failed with error \"%v\" instead.", i+1, instanceType, testCase.expectedError, actualErr)
    +			t.Errorf("Test %d: %s: Expected to fail with error \"%v\", but instead failed with error \"%v\" instead.", i, instanceType, testCase.expectedError, actualErr)
     			continue
     		}
     		// Test passes as expected, but the output values are verified for correctness here.
     		if actualErr == nil {
     			// Asserting whether the md5 output is correct.
     			if expectedMD5, ok := testCase.inputMeta["etag"]; ok && expectedMD5 != objInfo.ETag {
    -				t.Errorf("Test %d: %s: Calculated Md5 different from the actual one %s.", i+1, instanceType, objInfo.ETag)
    +				t.Errorf("Test %d: %s: Calculated Md5 different from the actual one %s.", i, instanceType, objInfo.ETag)
     				continue
     			}
     		}
    
  • cmd/object-handlers.go (+19 −9) modified
    @@ -1615,7 +1615,9 @@ func (api objectAPIHandlers) PutObjectHandler(w http.ResponseWriter, r *http.Req
     	// if Content-Length is unknown/missing, deny the request
     	size := r.ContentLength
     	rAuthType := getRequestAuthType(r)
    -	if rAuthType == authTypeStreamingSigned {
    +	switch rAuthType {
    +	// Check signature types that must have content length
    +	case authTypeStreamingSigned, authTypeStreamingSignedTrailer, authTypeStreamingUnsignedTrailer:
     		if sizeStr, ok := r.Header[xhttp.AmzDecodedContentLength]; ok {
     			if sizeStr[0] == "" {
     				writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMissingContentLength), r.URL)
    @@ -1669,9 +1671,16 @@ func (api objectAPIHandlers) PutObjectHandler(w http.ResponseWriter, r *http.Req
     	}
     
     	switch rAuthType {
    -	case authTypeStreamingSigned:
    +	case authTypeStreamingSigned, authTypeStreamingSignedTrailer:
     		// Initialize stream signature verifier.
    -		reader, s3Err = newSignV4ChunkedReader(r)
    +		reader, s3Err = newSignV4ChunkedReader(r, rAuthType == authTypeStreamingSignedTrailer)
    +		if s3Err != ErrNone {
    +			writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
    +			return
    +		}
    +	case authTypeStreamingUnsignedTrailer:
    +		// Initialize stream chunked reader with optional trailers.
    +		reader, s3Err = newUnsignedV4ChunkedReader(r, true)
     		if s3Err != ErrNone {
     			writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
     			return
    @@ -1903,7 +1912,6 @@ func (api objectAPIHandlers) PutObjectHandler(w http.ResponseWriter, r *http.Req
     	}
     
     	setPutObjHeaders(w, objInfo, false)
    -	writeSuccessResponseHeadersOnly(w)
     
     	// Notify object created event.
     	evt := eventArgs{
    @@ -1921,15 +1929,17 @@ func (api objectAPIHandlers) PutObjectHandler(w http.ResponseWriter, r *http.Req
     		sendEvent(evt)
     	}
     
    +	// Do not send checksums in events to avoid leaks.
    +	hash.TransferChecksumHeader(w, r)
    +	writeSuccessResponseHeadersOnly(w)
    +
     	// Remove the transitioned object whose object version is being overwritten.
     	if !globalTierConfigMgr.Empty() {
     		// Schedule object for immediate transition if eligible.
     		objInfo.ETag = origETag
     		enqueueTransitionImmediate(objInfo)
     		logger.LogIf(ctx, os.Sweep())
     	}
    -	// Do not send checksums in events to avoid leaks.
    -	hash.TransferChecksumHeader(w, r)
     }
     
     // PutObjectExtractHandler - PUT Object extract is an extended API
    @@ -1983,7 +1993,7 @@ func (api objectAPIHandlers) PutObjectExtractHandler(w http.ResponseWriter, r *h
     	// if Content-Length is unknown/missing, deny the request
     	size := r.ContentLength
     	rAuthType := getRequestAuthType(r)
    -	if rAuthType == authTypeStreamingSigned {
    +	if rAuthType == authTypeStreamingSigned || rAuthType == authTypeStreamingSignedTrailer {
     		if sizeStr, ok := r.Header[xhttp.AmzDecodedContentLength]; ok {
     			if sizeStr[0] == "" {
     				writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMissingContentLength), r.URL)
    @@ -2023,9 +2033,9 @@ func (api objectAPIHandlers) PutObjectExtractHandler(w http.ResponseWriter, r *h
     	}
     
     	switch rAuthType {
    -	case authTypeStreamingSigned:
    +	case authTypeStreamingSigned, authTypeStreamingSignedTrailer:
     		// Initialize stream signature verifier.
    -		reader, s3Err = newSignV4ChunkedReader(r)
    +		reader, s3Err = newSignV4ChunkedReader(r, rAuthType == authTypeStreamingSignedTrailer)
     		if s3Err != ErrNone {
     			writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
     			return
    
  • cmd/object-handlers_test.go (+2 −1) modified
    @@ -1100,14 +1100,15 @@ func testAPIPutObjectStreamSigV4Handler(obj ObjectLayer, instanceType, bucketNam
     		},
     		// Test case - 7
     		// Chunk with malformed encoding.
    +		// Causes signature mismatch.
     		{
     			bucketName:         bucketName,
     			objectName:         objectName,
     			data:               oneKData,
     			dataLen:            1024,
     			chunkSize:          1024,
     			expectedContent:    []byte{},
    -			expectedRespStatus: http.StatusBadRequest,
    +			expectedRespStatus: http.StatusForbidden,
     			accessKey:          credentials.AccessKey,
     			secretKey:          credentials.SecretKey,
     			shouldPass:         false,
    
  • cmd/object-multipart-handlers.go (+14 −4) modified
    @@ -590,7 +590,9 @@ func (api objectAPIHandlers) PutObjectPartHandler(w http.ResponseWriter, r *http
     
     	rAuthType := getRequestAuthType(r)
     	// For auth type streaming signature, we need to gather a different content length.
    -	if rAuthType == authTypeStreamingSigned {
    +	switch rAuthType {
    +	// Check signature types that must have content length
    +	case authTypeStreamingSigned, authTypeStreamingSignedTrailer, authTypeStreamingUnsignedTrailer:
     		if sizeStr, ok := r.Header[xhttp.AmzDecodedContentLength]; ok {
     			if sizeStr[0] == "" {
     				writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMissingContentLength), r.URL)
    @@ -603,6 +605,7 @@ func (api objectAPIHandlers) PutObjectPartHandler(w http.ResponseWriter, r *http
     			}
     		}
     	}
    +
     	if size == -1 {
     		writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMissingContentLength), r.URL)
     		return
    @@ -641,9 +644,16 @@ func (api objectAPIHandlers) PutObjectPartHandler(w http.ResponseWriter, r *http
     	}
     
     	switch rAuthType {
    -	case authTypeStreamingSigned:
    +	case authTypeStreamingSigned, authTypeStreamingSignedTrailer:
     		// Initialize stream signature verifier.
    -		reader, s3Error = newSignV4ChunkedReader(r)
    +		reader, s3Error = newSignV4ChunkedReader(r, rAuthType == authTypeStreamingSignedTrailer)
    +		if s3Error != ErrNone {
    +			writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
    +			return
    +		}
    +	case authTypeStreamingUnsignedTrailer:
    +		// Initialize stream chunked reader with optional trailers.
    +		reader, s3Error = newUnsignedV4ChunkedReader(r, true)
     		if s3Error != ErrNone {
     			writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
     			return
    @@ -689,7 +699,6 @@ func (api objectAPIHandlers) PutObjectPartHandler(w http.ResponseWriter, r *http
     
     	// Read compression metadata preserved in the init multipart for the decision.
     	_, isCompressed := mi.UserDefined[ReservedMetadataPrefix+"compression"]
    -
     	var idxCb func() []byte
     	if isCompressed {
     		actualReader, err := hash.NewReader(reader, size, md5hex, sha256hex, actualSize)
    @@ -718,6 +727,7 @@ func (api objectAPIHandlers) PutObjectPartHandler(w http.ResponseWriter, r *http
     		writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
     		return
     	}
    +
     	if err := hashReader.AddChecksum(r, size < 0); err != nil {
     		writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInvalidChecksum), r.URL)
     		return
    
  • cmd/signature-v4-utils.go (+11 −8) modified
    @@ -1,4 +1,4 @@
    -// Copyright (c) 2015-2021 MinIO, Inc.
    +// Copyright (c) 2015-2023 MinIO, Inc.
     //
     // This file is part of MinIO Object Storage stack
     //
    @@ -37,6 +37,10 @@ import (
     // client did not calculate sha256 of the payload.
     const unsignedPayload = "UNSIGNED-PAYLOAD"
     
    +// http Header "x-amz-content-sha256" == "STREAMING-UNSIGNED-PAYLOAD-TRAILER" indicates that the
    +// client did not calculate sha256 of the payload and there is a trailer.
    +const unsignedPayloadTrailer = "STREAMING-UNSIGNED-PAYLOAD-TRAILER"
    +
     // skipContentSha256Cksum returns true if caller needs to skip
     // payload checksum, false if not.
     func skipContentSha256Cksum(r *http.Request) bool {
    @@ -62,20 +66,19 @@ func skipContentSha256Cksum(r *http.Request) bool {
     	// If x-amz-content-sha256 is set and the value is not
     	// 'UNSIGNED-PAYLOAD' we should validate the content sha256.
     	switch v[0] {
    -	case unsignedPayload:
    +	case unsignedPayload, unsignedPayloadTrailer:
     		return true
     	case emptySHA256:
     		// some broken clients set empty-sha256
     		// with > 0 content-length in the body,
     		// we should skip such clients and allow
     		// blindly such insecure clients only if
     		// S3 strict compatibility is disabled.
    -		if r.ContentLength > 0 && !globalCLIContext.StrictS3Compat {
    -			// We return true only in situations when
    -			// deployment has asked MinIO to allow for
    -			// such broken clients and content-length > 0.
    -			return true
    -		}
    +
    +		// We return true only in situations when
    +		// deployment has asked MinIO to allow for
    +		// such broken clients and content-length > 0.
    +		return r.ContentLength > 0 && !globalCLIContext.StrictS3Compat
     	}
     	return false
     }
    
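The patched `skipContentSha256Cksum` is effectively a small decision table over the `x-amz-content-sha256` value. Restated as a sketch (simplified: the presigned-URL branch is omitted, and the function name and parameters are illustrative):

```go
package main

import "fmt"

// skipPayloadChecksum restates the decision table of the patched
// skipContentSha256Cksum: when does MinIO skip validating the payload
// sha256 against the header value?
func skipPayloadChecksum(sha256Hdr string, contentLength int64, strictS3 bool) bool {
	const (
		unsignedPayload        = "UNSIGNED-PAYLOAD"
		unsignedPayloadTrailer = "STREAMING-UNSIGNED-PAYLOAD-TRAILER"
		emptySHA256            = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
	)
	switch sha256Hdr {
	case unsignedPayload, unsignedPayloadTrailer:
		// Client declared it did not hash the payload.
		return true
	case emptySHA256:
		// Tolerate broken clients that send the empty-payload hash with a
		// non-empty body, but only when strict S3 compatibility is off.
		return contentLength > 0 && !strictS3
	}
	return false
}

func main() {
	fmt.Println(skipPayloadChecksum("STREAMING-UNSIGNED-PAYLOAD-TRAILER", 10, true)) // true
}
```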
  • cmd/streaming-signature-v4.go (+226 −30) modified
    @@ -24,9 +24,11 @@ import (
     	"bytes"
     	"encoding/hex"
     	"errors"
    +	"fmt"
     	"hash"
     	"io"
     	"net/http"
    +	"strings"
     	"time"
     
     	"github.com/dustin/go-humanize"
    @@ -37,24 +39,53 @@ import (
     
     // Streaming AWS Signature Version '4' constants.
     const (
    -	emptySHA256              = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    -	streamingContentSHA256   = "STREAMING-AWS4-HMAC-SHA256-PAYLOAD"
    -	signV4ChunkedAlgorithm   = "AWS4-HMAC-SHA256-PAYLOAD"
    -	streamingContentEncoding = "aws-chunked"
    +	emptySHA256                   = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    +	streamingContentSHA256        = "STREAMING-AWS4-HMAC-SHA256-PAYLOAD"
    +	streamingContentSHA256Trailer = "STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAILER"
    +	signV4ChunkedAlgorithm        = "AWS4-HMAC-SHA256-PAYLOAD"
    +	signV4ChunkedAlgorithmTrailer = "AWS4-HMAC-SHA256-TRAILER"
    +	streamingContentEncoding      = "aws-chunked"
    +	awsTrailerHeader              = "X-Amz-Trailer"
    +	trailerKVSeparator            = ":"
     )
     
     // getChunkSignature - get chunk signature.
    -func getChunkSignature(cred auth.Credentials, seedSignature string, region string, date time.Time, hashedChunk string) string {
    +// Does not update anything in cr.
    +func (cr *s3ChunkedReader) getChunkSignature() string {
    +	hashedChunk := hex.EncodeToString(cr.chunkSHA256Writer.Sum(nil))
    +
     	// Calculate string to sign.
    -	stringToSign := signV4ChunkedAlgorithm + "\n" +
    -		date.Format(iso8601Format) + "\n" +
    -		getScope(date, region) + "\n" +
    -		seedSignature + "\n" +
    +	alg := signV4ChunkedAlgorithm + "\n"
    +	stringToSign := alg +
    +		cr.seedDate.Format(iso8601Format) + "\n" +
    +		getScope(cr.seedDate, cr.region) + "\n" +
    +		cr.seedSignature + "\n" +
     		emptySHA256 + "\n" +
     		hashedChunk
     
     	// Get hmac signing key.
    -	signingKey := getSigningKey(cred.SecretKey, date, region, serviceS3)
    +	signingKey := getSigningKey(cr.cred.SecretKey, cr.seedDate, cr.region, serviceS3)
    +
    +	// Calculate signature.
    +	newSignature := getSignature(signingKey, stringToSign)
    +
    +	return newSignature
    +}
    +
    +// getTrailerChunkSignature - get trailer chunk signature.
    +func (cr *s3ChunkedReader) getTrailerChunkSignature() string {
    +	hashedChunk := hex.EncodeToString(cr.chunkSHA256Writer.Sum(nil))
    +
    +	// Calculate string to sign.
    +	alg := signV4ChunkedAlgorithmTrailer + "\n"
    +	stringToSign := alg +
    +		cr.seedDate.Format(iso8601Format) + "\n" +
    +		getScope(cr.seedDate, cr.region) + "\n" +
    +		cr.seedSignature + "\n" +
    +		hashedChunk
    +
    +	// Get hmac signing key.
    +	signingKey := getSigningKey(cr.cred.SecretKey, cr.seedDate, cr.region, serviceS3)
     
     	// Calculate signature.
     	newSignature := getSignature(signingKey, stringToSign)
    @@ -67,7 +98,7 @@ func getChunkSignature(cred auth.Credentials, seedSignature string, region strin
     //
     // returns signature, error otherwise if the signature mismatches or any other
     // error while parsing and validating.
    -func calculateSeedSignature(r *http.Request) (cred auth.Credentials, signature string, region string, date time.Time, errCode APIErrorCode) {
    +func calculateSeedSignature(r *http.Request, trailers bool) (cred auth.Credentials, signature string, region string, date time.Time, errCode APIErrorCode) {
     	// Copy request.
     	req := *r
     
    @@ -82,6 +113,9 @@ func calculateSeedSignature(r *http.Request) (cred auth.Credentials, signature s
     
     	// Payload streaming.
     	payload := streamingContentSHA256
    +	if trailers {
    +		payload = streamingContentSHA256Trailer
    +	}
     
     	// Payload for STREAMING signature should be 'STREAMING-AWS4-HMAC-SHA256-PAYLOAD'
     	if payload != req.Header.Get(xhttp.AmzContentSha256) {
    @@ -158,20 +192,32 @@ var errChunkTooBig = errors.New("chunk too big: choose chunk size <= 16MiB")
     //
     // NewChunkedReader is not needed by normal applications. The http package
     // automatically decodes chunking when reading response bodies.
    -func newSignV4ChunkedReader(req *http.Request) (io.ReadCloser, APIErrorCode) {
    -	cred, seedSignature, region, seedDate, errCode := calculateSeedSignature(req)
    +func newSignV4ChunkedReader(req *http.Request, trailer bool) (io.ReadCloser, APIErrorCode) {
    +	cred, seedSignature, region, seedDate, errCode := calculateSeedSignature(req, trailer)
     	if errCode != ErrNone {
     		return nil, errCode
     	}
     
    +	if trailer {
    +		// Discard anything unsigned.
    +		req.Trailer = make(http.Header)
    +		trailers := req.Header.Values(awsTrailerHeader)
    +		for _, key := range trailers {
    +			req.Trailer.Add(key, "")
    +		}
    +	} else {
    +		req.Trailer = nil
    +	}
     	return &s3ChunkedReader{
    +		trailers:          req.Trailer,
     		reader:            bufio.NewReader(req.Body),
     		cred:              cred,
     		seedSignature:     seedSignature,
     		seedDate:          seedDate,
     		region:            region,
     		chunkSHA256Writer: sha256.New(),
     		buffer:            make([]byte, 64*1024),
    +		debug:             false,
     	}, ErrNone
     }
     
    @@ -183,11 +229,13 @@ type s3ChunkedReader struct {
     	seedSignature string
     	seedDate      time.Time
     	region        string
    +	trailers      http.Header
     
     	chunkSHA256Writer hash.Hash // Calculates sha256 of chunk data.
     	buffer            []byte
     	offset            int
     	err               error
    +	debug             bool // Print details on failure. Add your own if more are needed.
     }
     
     func (cr *s3ChunkedReader) Close() (err error) {
    @@ -214,6 +262,19 @@ const maxChunkSize = 16 << 20 // 16 MiB
     // Read - implements `io.Reader`, which transparently decodes
     // the incoming AWS Signature V4 streaming signature.
     func (cr *s3ChunkedReader) Read(buf []byte) (n int, err error) {
    +	if cr.err != nil {
    +		if cr.debug {
    +			fmt.Printf("s3ChunkedReader: Returning err: %v (%T)\n", cr.err, cr.err)
    +		}
    +		return 0, cr.err
    +	}
    +	defer func() {
    +		if err != nil && err != io.EOF {
    +			if cr.debug {
    +				fmt.Println("Read err:", err)
    +			}
    +		}
    +	}()
     	// First, if there is any unread data, copy it to the client
     	// provided buffer.
     	if cr.offset > 0 {
    @@ -319,8 +380,43 @@ func (cr *s3ChunkedReader) Read(buf []byte) (n int, err error) {
     		cr.err = err
     		return n, cr.err
     	}
    +
    +	// Once we have read the entire chunk successfully, we verify
    +	// that the received signature matches our computed signature.
    +	cr.chunkSHA256Writer.Write(cr.buffer)
    +	newSignature := cr.getChunkSignature()
    +	if !compareSignatureV4(string(signature[16:]), newSignature) {
    +		cr.err = errSignatureMismatch
    +		return n, cr.err
    +	}
    +	cr.seedSignature = newSignature
    +	cr.chunkSHA256Writer.Reset()
    +
    +	// If the chunk size is zero we return io.EOF. As specified by AWS,
    +	// only the last chunk is zero-sized.
    +	if len(cr.buffer) == 0 {
    +		if cr.debug {
    +			fmt.Println("EOF. Reading Trailers:", cr.trailers)
    +		}
    +		if cr.trailers != nil {
    +			err = cr.readTrailers()
    +			if cr.debug {
    +				fmt.Println("trailers returned:", err, "now:", cr.trailers)
    +			}
    +			if err != nil {
    +				cr.err = err
    +				return 0, err
    +			}
    +		}
    +		cr.err = io.EOF
    +		return n, cr.err
    +	}
    +
     	b, err = cr.reader.ReadByte()
     	if b != '\r' || err != nil {
    +		if cr.debug {
    +			fmt.Printf("want %q, got %q\n", "\r", string(b))
    +		}
     		cr.err = errMalformedEncoding
     		return n, cr.err
     	}
    @@ -333,31 +429,131 @@ func (cr *s3ChunkedReader) Read(buf []byte) (n int, err error) {
     		return n, cr.err
     	}
     	if b != '\n' {
    +		if cr.debug {
    +			fmt.Printf("want %q, got %q\n", "\n", string(b))
    +		}
     		cr.err = errMalformedEncoding
     		return n, cr.err
     	}
     
    -	// Once we have read the entire chunk successfully, we verify
    -	// that the received signature matches our computed signature.
    -	cr.chunkSHA256Writer.Write(cr.buffer)
    -	newSignature := getChunkSignature(cr.cred, cr.seedSignature, cr.region, cr.seedDate, hex.EncodeToString(cr.chunkSHA256Writer.Sum(nil)))
    -	if !compareSignatureV4(string(signature[16:]), newSignature) {
    -		cr.err = errSignatureMismatch
    -		return n, cr.err
    +	cr.offset = copy(buf, cr.buffer)
    +	n += cr.offset
    +	return n, err
    +}
    +
    +// readTrailers will read all trailers and populate cr.trailers with actual values.
    +func (cr *s3ChunkedReader) readTrailers() error {
    +	var valueBuffer bytes.Buffer
    +	// Read value
    +	for {
    +		v, err := cr.reader.ReadByte()
    +		if err != nil {
    +			if err == io.EOF {
    +				return io.ErrUnexpectedEOF
    +			}
    +		}
    +		if v != '\r' {
    +			valueBuffer.WriteByte(v)
    +			continue
    +		}
    +		// End of buffer, do not add to value.
    +		v, err = cr.reader.ReadByte()
    +		if err != nil {
    +			if err == io.EOF {
    +				return io.ErrUnexpectedEOF
    +			}
    +		}
    +		if v != '\n' {
    +			return errMalformedEncoding
    +		}
    +		break
     	}
    -	cr.seedSignature = newSignature
    -	cr.chunkSHA256Writer.Reset()
     
    -	// If the chunk size is zero we return io.EOF. As specified by AWS,
    -	// only the last chunk is zero-sized.
    -	if size == 0 {
    -		cr.err = io.EOF
    -		return n, cr.err
    +	// Read signature
    +	var signatureBuffer bytes.Buffer
    +	for {
    +		v, err := cr.reader.ReadByte()
    +		if err != nil {
    +			if err == io.EOF {
    +				return io.ErrUnexpectedEOF
    +			}
    +		}
    +		if v != '\r' {
    +			signatureBuffer.WriteByte(v)
    +			continue
    +		}
    +		var tmp [3]byte
    +		_, err = io.ReadFull(cr.reader, tmp[:])
    +		if err != nil {
    +			if err == io.EOF {
    +				return io.ErrUnexpectedEOF
    +			}
    +		}
    +		if string(tmp[:]) != "\n\r\n" {
    +			if cr.debug {
    +				fmt.Printf("signature, want %q, got %q", "\n\r\n", string(tmp[:]))
    +			}
    +			return errMalformedEncoding
    +		}
    +		// No need to write final newlines to buffer.
    +		break
     	}
     
    -	cr.offset = copy(buf, cr.buffer)
    -	n += cr.offset
    -	return n, err
    +	// Verify signature.
    +	sig := signatureBuffer.Bytes()
    +	if !bytes.HasPrefix(sig, []byte("x-amz-trailer-signature:")) {
    +		if cr.debug {
    +			fmt.Printf("prefix, want prefix %q, got %q", "x-amz-trailer-signature:", string(sig))
    +		}
    +		return errMalformedEncoding
    +	}
    +	sig = sig[len("x-amz-trailer-signature:"):]
    +	sig = bytes.TrimSpace(sig)
    +	cr.chunkSHA256Writer.Write(valueBuffer.Bytes())
    +	wantSig := cr.getTrailerChunkSignature()
    +	if !compareSignatureV4(string(sig), wantSig) {
    +		if cr.debug {
    +			fmt.Printf("signature, want: %q, got %q\nSignature buffer: %q\n", wantSig, string(sig), string(valueBuffer.Bytes()))
    +		}
    +		return errSignatureMismatch
    +	}
    +
    +	// Parse trailers.
    +	wantTrailers := make(map[string]struct{}, len(cr.trailers))
    +	for k := range cr.trailers {
    +		wantTrailers[strings.ToLower(k)] = struct{}{}
    +	}
    +	input := bufio.NewScanner(bytes.NewReader(valueBuffer.Bytes()))
    +	for input.Scan() {
    +		line := strings.TrimSpace(input.Text())
    +		if line == "" {
    +			continue
    +		}
    +		// Find first separator.
    +		idx := strings.IndexByte(line, trailerKVSeparator[0])
    +		if idx <= 0 || idx >= len(line) {
    +			if cr.debug {
    +				fmt.Printf("index, ':' not found in %q\n", line)
    +			}
    +			return errMalformedEncoding
    +		}
    +		key := line[:idx]
    +		value := line[idx+1:]
    +		if _, ok := wantTrailers[key]; !ok {
    +			if cr.debug {
    +				fmt.Printf("%q not found in %q\n", key, cr.trailers)
    +			}
    +			return errMalformedEncoding
    +		}
    +		cr.trailers.Set(key, value)
    +		delete(wantTrailers, key)
    +	}
    +
    +	// Check if we got all we want.
    +	if len(wantTrailers) > 0 {
    +		return io.ErrUnexpectedEOF
    +	}
    +	return nil
     }
     
     // readCRLF - check if reader only has '\r\n' CRLF character.
    
  • cmd/streaming-v4-unsigned.go (+257, -0) added
    @@ -0,0 +1,257 @@
    +// Copyright (c) 2015-2023 MinIO, Inc.
    +//
    +// This file is part of MinIO Object Storage stack
    +//
    +// This program is free software: you can redistribute it and/or modify
    +// it under the terms of the GNU Affero General Public License as published by
    +// the Free Software Foundation, either version 3 of the License, or
    +// (at your option) any later version.
    +//
    +// This program is distributed in the hope that it will be useful
    +// but WITHOUT ANY WARRANTY; without even the implied warranty of
    +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    +// GNU Affero General Public License for more details.
    +//
    +// You should have received a copy of the GNU Affero General Public License
    +// along with this program.  If not, see <http://www.gnu.org/licenses/>.
    +
    +package cmd
    +
    +import (
    +	"bufio"
    +	"bytes"
    +	"fmt"
    +	"io"
    +	"net/http"
    +	"strings"
    +)
    +
    +// newUnsignedV4ChunkedReader returns a new s3UnsignedChunkedReader that translates the data read from r
    +// out of HTTP "chunked" format before returning it.
    +// The s3ChunkedReader returns io.EOF when the final 0-length chunk is read.
    +func newUnsignedV4ChunkedReader(req *http.Request, trailer bool) (io.ReadCloser, APIErrorCode) {
    +	if trailer {
    +		// Discard anything unsigned.
    +		req.Trailer = make(http.Header)
    +		trailers := req.Header.Values(awsTrailerHeader)
    +		for _, key := range trailers {
    +			req.Trailer.Add(key, "")
    +		}
    +	} else {
    +		req.Trailer = nil
    +	}
    +	return &s3UnsignedChunkedReader{
    +		trailers: req.Trailer,
    +		reader:   bufio.NewReader(req.Body),
    +		buffer:   make([]byte, 64*1024),
    +	}, ErrNone
    +}
    +
    +// Represents the overall state that is required for decoding a
    +// AWS Signature V4 chunked reader.
    +type s3UnsignedChunkedReader struct {
    +	reader   *bufio.Reader
    +	trailers http.Header
    +
    +	buffer []byte
    +	offset int
    +	err    error
    +	debug  bool
    +}
    +
    +func (cr *s3UnsignedChunkedReader) Close() (err error) {
    +	return cr.err
    +}
    +
    +// Read - implements `io.Reader`, which transparently decodes
    +// the incoming AWS Signature V4 streaming signature.
    +func (cr *s3UnsignedChunkedReader) Read(buf []byte) (n int, err error) {
    +	// First, if there is any unread data, copy it to the client
    +	// provided buffer.
    +	if cr.offset > 0 {
    +		n = copy(buf, cr.buffer[cr.offset:])
    +		if n == len(buf) {
    +			cr.offset += n
    +			return n, nil
    +		}
    +		cr.offset = 0
    +		buf = buf[n:]
    +	}
    +	// mustRead reads from input and compares against provided slice.
    +	mustRead := func(b ...byte) error {
    +		for _, want := range b {
    +			got, err := cr.reader.ReadByte()
    +			if err == io.EOF {
    +				return io.ErrUnexpectedEOF
    +			}
    +			if got != want {
    +				if cr.debug {
    +					fmt.Printf("mustread: want: %q got: %q\n", string(want), string(got))
    +				}
    +				return errMalformedEncoding
    +			}
    +			if err != nil {
    +				return err
    +			}
    +		}
    +		return nil
    +	}
    +	var size int
    +	for {
    +		b, err := cr.reader.ReadByte()
    +		if err == io.EOF {
    +			err = io.ErrUnexpectedEOF
    +		}
    +		if err != nil {
    +			cr.err = err
    +			return n, cr.err
    +		}
    +		if b == '\r' { // \r\n denotes end of size.
    +			err := mustRead('\n')
    +			if err != nil {
    +				cr.err = err
    +				return n, cr.err
    +			}
    +			break
    +		}
    +
    +		// Manually deserialize the size since AWS specified
    +		// the chunk size to be of variable width. In particular,
    +		// a size of 16 is encoded as `10` while a size of 64 KB
    +		// is `10000`.
    +		switch {
    +		case b >= '0' && b <= '9':
    +			size = size<<4 | int(b-'0')
    +		case b >= 'a' && b <= 'f':
    +			size = size<<4 | int(b-('a'-10))
    +		case b >= 'A' && b <= 'F':
    +			size = size<<4 | int(b-('A'-10))
    +		default:
    +			if cr.debug {
    +				fmt.Printf("err size: %v\n", string(b))
    +			}
    +			cr.err = errMalformedEncoding
    +			return n, cr.err
    +		}
    +		if size > maxChunkSize {
    +			cr.err = errChunkTooBig
    +			return n, cr.err
    +		}
    +	}
    +
    +	if cap(cr.buffer) < size {
    +		cr.buffer = make([]byte, size)
    +	} else {
    +		cr.buffer = cr.buffer[:size]
    +	}
    +
    +	// Now, we read the payload.
    +	_, err = io.ReadFull(cr.reader, cr.buffer)
    +	if err == io.EOF && size != 0 {
    +		err = io.ErrUnexpectedEOF
    +	}
    +	if err != nil && err != io.EOF {
    +		cr.err = err
    +		return n, cr.err
    +	}
    +
    +	// If the chunk size is zero we return io.EOF. As specified by AWS,
    +	// only the last chunk is zero-sized.
    +	if len(cr.buffer) == 0 {
    +		if cr.debug {
    +			fmt.Println("EOF")
    +		}
    +		if cr.trailers != nil {
    +			err = cr.readTrailers()
    +			if cr.debug {
    +				fmt.Println("trailer returned:", err)
    +			}
    +			if err != nil {
    +				cr.err = err
    +				return 0, err
    +			}
    +		}
    +		cr.err = io.EOF
    +		return n, cr.err
    +	}
    +	// read final terminator.
    +	err = mustRead('\r', '\n')
    +	if err != nil && err != io.EOF {
    +		cr.err = err
    +		return n, cr.err
    +	}
    +
    +	cr.offset = copy(buf, cr.buffer)
    +	n += cr.offset
    +	return n, err
    +}
    +
    +// readTrailers will read all trailers and populate cr.trailers with actual values.
    +func (cr *s3UnsignedChunkedReader) readTrailers() error {
    +	var valueBuffer bytes.Buffer
    +	// Read value
    +	for {
    +		v, err := cr.reader.ReadByte()
    +		if err != nil {
    +			if err == io.EOF {
    +				return io.ErrUnexpectedEOF
    +			}
    +		}
    +		if v != '\r' {
    +			valueBuffer.WriteByte(v)
    +			continue
    +		}
    +		// Must end with \r\n\r\n
    +		var tmp [3]byte
    +		_, err = io.ReadFull(cr.reader, tmp[:])
    +		if err != nil {
    +			if err == io.EOF {
    +				return io.ErrUnexpectedEOF
    +			}
    +		}
    +		if !bytes.Equal(tmp[:], []byte{'\n', '\r', '\n'}) {
    +			if cr.debug {
    +				fmt.Printf("got %q, want %q\n", string(tmp[:]), "\n\r\n")
    +			}
    +			return errMalformedEncoding
    +		}
    +		break
    +	}
    +
    +	// Parse trailers.
    +	wantTrailers := make(map[string]struct{}, len(cr.trailers))
    +	for k := range cr.trailers {
    +		wantTrailers[strings.ToLower(k)] = struct{}{}
    +	}
    +	input := bufio.NewScanner(bytes.NewReader(valueBuffer.Bytes()))
    +	for input.Scan() {
    +		line := strings.TrimSpace(input.Text())
    +		if line == "" {
    +			continue
    +		}
    +		// Find first separator.
    +		idx := strings.IndexByte(line, trailerKVSeparator[0])
    +		if idx <= 0 || idx >= len(line) {
    +			if cr.debug {
    +				fmt.Printf("Could not find separator, got %q\n", line)
    +			}
    +			return errMalformedEncoding
    +		}
    +		key := strings.ToLower(line[:idx])
    +		value := line[idx+1:]
    +		if _, ok := wantTrailers[key]; !ok {
    +			if cr.debug {
    +				fmt.Printf("Unknown key %q - expected one of %v\n", key, cr.trailers)
    +			}
    +			return errMalformedEncoding
    +		}
    +		cr.trailers.Set(key, value)
    +		delete(wantTrailers, key)
    +	}
    +
    +	// Check if we got all we want.
    +	if len(wantTrailers) > 0 {
    +		return io.ErrUnexpectedEOF
    +	}
    +	return nil
    +}
    
  • internal/hash/checksum.go (+43, -10) modified
    @@ -303,8 +303,8 @@ func (c Checksum) Valid() bool {
     	if c.Type == ChecksumInvalid {
     		return false
     	}
    -	if len(c.Encoded) == 0 || c.Type.Is(ChecksumTrailing) {
    -		return c.Type.Is(ChecksumNone) || c.Type.Is(ChecksumTrailing)
    +	if len(c.Encoded) == 0 || c.Type.Trailing() {
    +		return c.Type.Is(ChecksumNone) || c.Type.Trailing()
     	}
     	raw := c.Raw
     	return c.Type.RawByteLen() == len(raw)
    @@ -339,10 +339,21 @@ func (c *Checksum) AsMap() map[string]string {
     }
     
     // TransferChecksumHeader will transfer any checksum value that has been checked.
    +// If checksum was trailing, they must have been added to r.Trailer.
     func TransferChecksumHeader(w http.ResponseWriter, r *http.Request) {
    -	t, s := getContentChecksum(r)
    -	if !t.IsSet() || t.Is(ChecksumTrailing) {
    -		// TODO: Add trailing when we can read it.
    +	c, err := GetContentChecksum(r)
    +	if err != nil || c == nil {
    +		return
    +	}
    +	t, s := c.Type, c.Encoded
    +	if !c.Type.IsSet() {
    +		return
    +	}
    +	if c.Type.Is(ChecksumTrailing) {
    +		val := r.Trailer.Get(t.Key())
    +		if val != "" {
    +			w.Header().Set(t.Key(), val)
    +		}
     		return
     	}
     	w.Header().Set(t.Key(), s)
    @@ -365,6 +376,32 @@ func AddChecksumHeader(w http.ResponseWriter, c map[string]string) {
     // Returns ErrInvalidChecksum if so.
     // Returns nil, nil if no checksum.
     func GetContentChecksum(r *http.Request) (*Checksum, error) {
    +	if trailing := r.Header.Values(xhttp.AmzTrailer); len(trailing) > 0 {
    +		var res *Checksum
    +		for _, header := range trailing {
    +			var duplicates bool
    +			switch {
    +			case strings.EqualFold(header, ChecksumCRC32C.Key()):
    +				duplicates = res != nil
    +				res = NewChecksumWithType(ChecksumCRC32C|ChecksumTrailing, "")
    +			case strings.EqualFold(header, ChecksumCRC32.Key()):
    +				duplicates = res != nil
    +				res = NewChecksumWithType(ChecksumCRC32|ChecksumTrailing, "")
    +			case strings.EqualFold(header, ChecksumSHA256.Key()):
    +				duplicates = res != nil
    +				res = NewChecksumWithType(ChecksumSHA256|ChecksumTrailing, "")
    +			case strings.EqualFold(header, ChecksumSHA1.Key()):
    +				duplicates = res != nil
    +				res = NewChecksumWithType(ChecksumSHA1|ChecksumTrailing, "")
    +			}
    +			if duplicates {
    +				return nil, ErrInvalidChecksum
    +			}
    +		}
    +		if res != nil {
    +			return res, nil
    +		}
    +	}
     	t, s := getContentChecksum(r)
     	if t == ChecksumNone {
     		if s == "" {
    @@ -389,11 +426,6 @@ func getContentChecksum(r *http.Request) (t ChecksumType, s string) {
     		if t.IsSet() {
     			hdr := t.Key()
     			if s = r.Header.Get(hdr); s == "" {
    -				if strings.EqualFold(r.Header.Get(xhttp.AmzTrailer), hdr) {
    -					t |= ChecksumTrailing
    -				} else {
    -					t = ChecksumInvalid
    -				}
     				return ChecksumNone, ""
     			}
     		}
    @@ -409,6 +441,7 @@ func getContentChecksum(r *http.Request) (t ChecksumType, s string) {
     				t = c
     				s = got
     			}
    +			return
     		}
     	}
     	checkType(ChecksumCRC32)
    
  • internal/hash/reader.go (+22, -4) modified
    @@ -28,6 +28,7 @@ import (
     
     	"github.com/minio/minio/internal/etag"
     	"github.com/minio/minio/internal/hash/sha256"
    +	"github.com/minio/minio/internal/ioutil"
     )
     
     // A Reader wraps an io.Reader and computes the MD5 checksum
    @@ -51,6 +52,8 @@ type Reader struct {
     	contentHash   Checksum
     	contentHasher hash.Hash
     
    +	trailer http.Header
    +
     	sha256 hash.Hash
     }
     
    @@ -107,7 +110,7 @@ func NewReader(src io.Reader, size int64, md5Hex, sha256Hex string, actualSize i
     		r.checksum = MD5
     		r.contentSHA256 = SHA256
     		if r.size < 0 && size >= 0 {
    -			r.src = etag.Wrap(io.LimitReader(r.src, size), r.src)
    +			r.src = etag.Wrap(ioutil.HardLimitReader(r.src, size), r.src)
     			r.size = size
     		}
     		if r.actualSize <= 0 && actualSize >= 0 {
    @@ -117,7 +120,7 @@ func NewReader(src io.Reader, size int64, md5Hex, sha256Hex string, actualSize i
     	}
     
     	if size >= 0 {
    -		r := io.LimitReader(src, size)
    +		r := ioutil.HardLimitReader(src, size)
     		if _, ok := src.(etag.Tagger); !ok {
     			src = etag.NewReader(r, MD5)
     		} else {
    @@ -155,10 +158,14 @@ func (r *Reader) AddChecksum(req *http.Request, ignoreValue bool) error {
     		return nil
     	}
     	r.contentHash = *cs
    -	if cs.Type.Trailing() || ignoreValue {
    -		// Ignore until we have trailing headers.
    +	if cs.Type.Trailing() {
    +		r.trailer = req.Trailer
    +	}
    +	if ignoreValue {
    +		// Do not validate, but allow for transfer
     		return nil
     	}
    +
     	r.contentHasher = cs.Type.Hasher()
     	if r.contentHasher == nil {
     		return ErrInvalidChecksum
    @@ -186,6 +193,14 @@ func (r *Reader) Read(p []byte) (int, error) {
     			}
     		}
     		if r.contentHasher != nil {
    +			if r.contentHash.Type.Trailing() {
    +				var err error
    +				r.contentHash.Encoded = r.trailer.Get(r.contentHash.Type.Key())
    +				r.contentHash.Raw, err = base64.StdEncoding.DecodeString(r.contentHash.Encoded)
    +				if err != nil || len(r.contentHash.Raw) == 0 {
    +					return 0, ChecksumMismatch{Got: r.contentHash.Encoded}
    +				}
    +			}
     			if sum := r.contentHasher.Sum(nil); !bytes.Equal(r.contentHash.Raw, sum) {
     				err := ChecksumMismatch{
     					Want: r.contentHash.Encoded,
    @@ -276,6 +291,9 @@ func (r *Reader) ContentCRC() map[string]string {
     	if r.contentHash.Type == ChecksumNone || !r.contentHash.Valid() {
     		return nil
     	}
    +	if r.contentHash.Type.Trailing() {
    +		return map[string]string{r.contentHash.Type.String(): r.trailer.Get(r.contentHash.Type.Key())}
    +	}
     	return map[string]string{r.contentHash.Type.String(): r.contentHash.Encoded}
     }
     
    
  • internal/hash/reader_test.go (+14, -3) modified
    @@ -23,6 +23,8 @@ import (
     	"fmt"
     	"io"
     	"testing"
    +
    +	"github.com/minio/minio/internal/ioutil"
     )
     
     // Tests functions like Size(), MD5*(), SHA256*()
    @@ -79,7 +81,7 @@ func TestHashReaderVerification(t *testing.T) {
     		md5hex, sha256hex string
     		err               error
     	}{
    -		{
    +		0: {
     			desc:       "Success, no checksum verification provided.",
     			src:        bytes.NewReader([]byte("abcd")),
     			size:       4,
    @@ -124,7 +126,7 @@ func TestHashReaderVerification(t *testing.T) {
     				CalculatedSHA256: "88d4266fd4e6338d13b845fcf289579d209c897823b9217da3e161936f031589",
     			},
     		},
    -		{
    +		5: {
     			desc:       "Correct sha256, nested",
     			src:        mustReader(t, bytes.NewReader([]byte("abcd")), 4, "", "", 4),
     			size:       4,
    @@ -137,13 +139,15 @@ func TestHashReaderVerification(t *testing.T) {
     			size:       4,
     			actualSize: -1,
     			sha256hex:  "88d4266fd4e6338d13b845fcf289579d209c897823b9217da3e161936f031589",
    +			err:        ioutil.ErrOverread,
     		},
    -		{
    +		7: {
     			desc:       "Correct sha256, nested, truncated, swapped",
     			src:        mustReader(t, bytes.NewReader([]byte("abcd-more-stuff-to-be ignored")), 4, "", "", -1),
     			size:       4,
     			actualSize: -1,
     			sha256hex:  "88d4266fd4e6338d13b845fcf289579d209c897823b9217da3e161936f031589",
    +			err:        ioutil.ErrOverread,
     		},
     		{
     			desc:       "Incorrect MD5, nested",
    @@ -162,6 +166,7 @@ func TestHashReaderVerification(t *testing.T) {
     			size:       4,
     			actualSize: 4,
     			sha256hex:  "88d4266fd4e6338d13b845fcf289579d209c897823b9217da3e161936f031589",
    +			err:        ioutil.ErrOverread,
     		},
     		{
     			desc:       "Correct MD5, nested",
    @@ -177,13 +182,15 @@ func TestHashReaderVerification(t *testing.T) {
     			actualSize: 4,
     			sha256hex:  "",
     			md5hex:     "e2fc714c4727ee9395f324cd2e7f331f",
    +			err:        ioutil.ErrOverread,
     		},
     		{
     			desc:       "Correct MD5, nested, truncated",
     			src:        mustReader(t, bytes.NewReader([]byte("abcd-morestuff")), -1, "", "", -1),
     			size:       4,
     			actualSize: 4,
     			md5hex:     "e2fc714c4727ee9395f324cd2e7f331f",
    +			err:        ioutil.ErrOverread,
     		},
     	}
     	for i, testCase := range testCases {
    @@ -194,6 +201,10 @@ func TestHashReaderVerification(t *testing.T) {
     			}
     			_, err = io.Copy(io.Discard, r)
     			if err != nil {
    +				if testCase.err == nil {
    +					t.Errorf("Test %q; got unexpected error: %v", testCase.desc, err)
    +					return
    +				}
     				if err.Error() != testCase.err.Error() {
     					t.Errorf("Test %q: Expected error %s, got error %s", testCase.desc, testCase.err, err)
     				}
    
  • internal/ioutil/hardlimitreader.go (+56, -0) added
    @@ -0,0 +1,56 @@
    +// Copyright (c) 2015-2023 MinIO, Inc.
    +//
    +// This file is part of MinIO Object Storage stack
    +//
    +// This program is free software: you can redistribute it and/or modify
    +// it under the terms of the GNU Affero General Public License as published by
    +// the Free Software Foundation, either version 3 of the License, or
    +// (at your option) any later version.
    +//
    +// This program is distributed in the hope that it will be useful
    +// but WITHOUT ANY WARRANTY; without even the implied warranty of
    +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    +// GNU Affero General Public License for more details.
    +//
    +// You should have received a copy of the GNU Affero General Public License
    +// along with this program.  If not, see <http://www.gnu.org/licenses/>.
    +
    +// Package ioutil implements some I/O utility functions which are not covered
    +// by the standard library.
    +package ioutil
    +
    +import (
    +	"errors"
    +	"io"
    +)
    +
    +// ErrOverread is returned to the reader when the hard limit of HardLimitReader is exceeded.
    +var ErrOverread = errors.New("input provided more bytes than specified")
    +
    +// HardLimitReader returns a Reader that reads from r
    +// but returns an error if the source provides more data than allowed.
    +// This means the source *will* be overread unless EOF is returned prior.
    +// The underlying implementation is a *HardLimitedReader.
    +// This will ensure that at most n bytes are returned and EOF is reached.
    +func HardLimitReader(r io.Reader, n int64) io.Reader { return &HardLimitedReader{r, n} }
    +
    +// A HardLimitedReader reads from R but limits the amount of
    +// data returned to just N bytes. Each call to Read
    +// updates N to reflect the new amount remaining.
    +// Read returns ErrOverread once R has provided more than N bytes,
    +// and EOF when the underlying R returns EOF within the limit.
    +type HardLimitedReader struct {
    +	R io.Reader // underlying reader
    +	N int64     // max bytes remaining
    +}
    +
    +func (l *HardLimitedReader) Read(p []byte) (n int, err error) {
    +	if l.N < 0 {
    +		return 0, ErrOverread
    +	}
    +	n, err = l.R.Read(p)
    +	l.N -= int64(n)
    +	if l.N < 0 {
    +		return 0, ErrOverread
    +	}
    +	return
    +}
    

Vulnerability mechanics

