Similar to #637, but for the generation side: we may want to limit the amount of processing based on some criteria.
Compared to the input side, constraining generation may be less important for DoS prevention, but there are some aspects that would benefit from having limits.
First: limiting maximum nesting depth (defaulting to, say, 1000 levels). While this may not be an easy DoS attack vector, it is a (relatively) common accidental "own goal" case where attempts to serialize cyclic data structures result in a StackOverflowError.
While there are possible approaches to preventing this with other mechanisms, capping maximum nesting depth would be a straightforward and efficient way to avoid the SOE and the resulting major resource drain: instead of having to maintain a partial object graph to look for "back links", we simply keep track of the current nesting level. The cap can be configured to a value high enough not to block typical legitimate usage, yet stop runaway recursion by serializers well before an SOE.
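A minimal sketch of the idea, purely for illustration (class and method names here are hypothetical, not actual jackson-core API):

```java
// Illustrative sketch only: hypothetical names, not existing jackson-core API.
public class DepthLimitingWriteContext {
    private final int maxNestingDepth; // e.g. 1000 by default, per the proposal above
    private int depth;

    public DepthLimitingWriteContext(int maxNestingDepth) {
        this.maxNestingDepth = maxNestingDepth;
    }

    // Would be called from writeStartObject()/writeStartArray()
    void enterLevel() {
        if (++depth > maxNestingDepth) {
            // Fail fast instead of recursing until StackOverflowError
            throw new IllegalStateException("Document nesting depth (" + depth
                + ") exceeds maximum allowed (" + maxNestingDepth + ")");
        }
    }

    // Would be called from writeEndObject()/writeEndArray()
    void exitLevel() {
        --depth;
    }
}
```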
Other possible later additions could include:
- Limit on the number of Object properties to write per level -- we keep track of counts in `JsonWriteContext` anyway, so this could be relatively simple (implemented via #1055, "Add StreamWriteConstraints with a nesting depth check"); see the rough sketch after this list
- Limit on Object property name length (simple enough to check)

but these are just speculative ones, not requested at this point.
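For illustration only, such per-level checks might look roughly like the following (again a sketch; the field and method names are hypothetical and not existing `JsonWriteContext` API):

```java
// Hypothetical sketch of per-level property checks; not actual jackson-core code.
class WriteContextSketch {
    private final int maxPropertiesPerObject; // hypothetical limit
    private int index = -1; // index of current entry, as write contexts already track

    WriteContextSketch(int maxPropertiesPerObject) {
        this.maxPropertiesPerObject = maxPropertiesPerObject;
    }

    // Would be called when a new Object property name is written
    void writeName(String name) {
        if (++index >= maxPropertiesPerObject) {
            throw new IllegalStateException("Object property count (" + (index + 1)
                + ") exceeds maximum allowed (" + maxPropertiesPerObject + ")");
        }
        // A property-name length limit would be an equally simple check:
        // if (name.length() > maxNameLength) { ... }
    }
}
```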
As to possible implementation: this should follow the pattern established with #637, adding a StreamWriteConstraints class, starting with the nesting-depth limit as the first constraint. We should probably also allow something similar to #1019 immediately (wrt overriding the static defaults).
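Configuration could then mirror the read-side StreamReadConstraints API. A rough usage sketch, assuming the builder-style methods from #637 are carried over to the write side (the StreamWriteConstraints class and its methods are proposed here, not guaranteed API):

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.StreamWriteConstraints;

// Sketch of possible configuration, mirroring the StreamReadConstraints pattern;
// method names are assumptions based on the read-side API, not confirmed by this issue.
public class WriteConstraintsExample {
    public static void main(String[] args) {
        // Per-factory configuration via the factory builder (assumed method names)
        JsonFactory f = JsonFactory.builder()
                .streamWriteConstraints(StreamWriteConstraints.builder()
                        .maxNestingDepth(500)   // lower than the proposed 1000 default
                        .build())
                .build();

        // Static default override, similar to #1019 on the read side (assumed method name)
        StreamWriteConstraints.overrideDefaultStreamWriteConstraints(
                StreamWriteConstraints.builder().maxNestingDepth(2000).build());
    }
}
```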