
Conversation


dependabot[bot] commented on behalf of GitHub on May 21, 2022

Bumps tensorflow from 2.6.2 to 2.9.0.

Release notes

Sourced from tensorflow's releases.

TensorFlow 2.9.0

Release 2.9.0

Breaking Changes

  • Due to security issues in TF 2.8, all boosted trees code has now been removed (after being deprecated in TF 2.8). Users should switch to TensorFlow Decision Forests.
  • Build, Compilation and Packaging
    • TensorFlow is now compiled with _GLIBCXX_USE_CXX11_ABI=1. Downstream projects that encounter std::__cxx11 or [abi:cxx11] linker errors will need to adopt this compiler option. See the GNU C++ Library docs on Dual ABI.
    • TensorFlow Python wheels now specifically conform to manylinux2014, an upgrade from manylinux2010. The minimum Pip version supporting manylinux2014 is Pip 19.3 (see pypa/manylinux). This change may affect you if you have been using TensorFlow on a very old platform equivalent to CentOS 6, as manylinux2014 targets CentOS 7 as a compatibility base. Note that TensorFlow does not officially support either platform.
    • Discussion of these changes can be found on SIG Build's TensorFlow Community Forum thread.
  • The tf.keras.mixed_precision.experimental API has been removed. The non-experimental symbols under tf.keras.mixed_precision have been available since TensorFlow 2.4 and should be used instead.
    • The non-experimental API has some minor differences from the experimental API. In most cases, you only need to make three small changes (a migration sketch follows this list):
      • Remove the word "experimental" from tf.keras.mixed_precision symbols. E.g., replace tf.keras.mixed_precision.experimental.global_policy with tf.keras.mixed_precision.global_policy.
      • Replace tf.keras.mixed_precision.experimental.set_policy with tf.keras.mixed_precision.set_global_policy. The experimental symbol set_policy was renamed to set_global_policy in the non-experimental API.
      • Replace LossScaleOptimizer(opt, "dynamic") with LossScaleOptimizer(opt). If you pass anything other than "dynamic" to the second argument, see (1) of the next section.
    • In the following rare cases, you need to make more changes when switching to the non-experimental API:
      • If you passed anything other than "dynamic" to the loss_scale argument (the second argument) of LossScaleOptimizer:
      • If you passed a value to the loss_scale argument (the second argument) of Policy:
        • The experimental version of Policy optionally took in a tf.compat.v1.mixed_precision.LossScale in the constructor, which defaulted to a dynamic loss scale for the "mixed_float16" policy and no loss scale for other policies. In Model.compile, if the model's policy had a loss scale, the optimizer would be wrapped with a LossScaleOptimizer. With the non-experimental Policy, there is no loss scale associated with the Policy, and Model.compile wraps the optimizer with a LossScaleOptimizer if and only if the policy is a "mixed_float16" policy. If you previously passed a LossScale to the experimental Policy, consider just removing it, as the default loss scaling behavior is usually what you want. If you really want to customize the loss scaling behavior, you can wrap your optimizer with a LossScaleOptimizer before passing it to Model.compile.
      • If you use the very rarely-used function tf.keras.mixed_precision.experimental.get_layer_policy:
        • Replace tf.keras.mixed_precision.experimental.get_layer_policy(layer) with layer.dtype_policy.
  • tf.mixed_precision.experimental.LossScale and its subclasses have been removed from the TF2 namespace. These symbols were very rarely used and were only useful in TF2 in combination with the now-removed tf.keras.mixed_precision.experimental API. The symbols are still available under tf.compat.v1.mixed_precision.
  • The experimental_relax_shapes heuristic for tf.function has been deprecated and replaced with reduce_retracing, which encompasses broader heuristics to reduce the number of retraces (see below).
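
For migrators, here is a minimal sketch of the experimental_relax_shapes rename on the tf.function decorator (the traced function is purely illustrative, not from the release notes):

```
import tensorflow as tf

# Before (deprecated): @tf.function(experimental_relax_shapes=True)
# After (TF 2.9):
@tf.function(reduce_retracing=True)
def square(x):
    return x * x

# Calls with new input shapes can reuse a shape-relaxed trace
# instead of retracing each time.
square(tf.constant([1.0, 2.0]))
square(tf.constant([1.0, 2.0, 3.0]))
```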

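And the mixed-precision migration referenced above, as a minimal before/after sketch (the SGD optimizer and Dense layer are placeholders, not part of the release notes):

```
import tensorflow as tf

# Before (TF <= 2.8, experimental API -- now removed):
#   tf.keras.mixed_precision.experimental.set_policy("mixed_float16")
#   opt = tf.keras.mixed_precision.experimental.LossScaleOptimizer(
#       tf.keras.optimizers.SGD(), "dynamic")

# After (TF 2.9, non-experimental API):
tf.keras.mixed_precision.set_global_policy("mixed_float16")
opt = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.SGD())

# get_layer_policy(layer) is replaced by the layer attribute:
layer = tf.keras.layers.Dense(8)
print(layer.dtype_policy)
```
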
Major Features and Improvements

  • tf.keras:

    • Added tf.keras.applications.resnet_rs models. This includes the ResNetRS50, ResNetRS101, ResNetRS152, ResNetRS200, ResNetRS270, ResNetRS350 and ResNetRS420 model architectures. The ResNetRS models are based on the architecture described in Revisiting ResNets: Improved Training and Scaling Strategies.
    • Added tf.keras.optimizers.experimental.Optimizer. The reworked optimizer gives more control over different phases of optimizer calls, and is easier to customize. We provide Adam, SGD, Adadelta, AdaGrad and RMSprop optimizers based on tf.keras.optimizers.experimental.Optimizer. Generally the new optimizers work in the same way as the old ones, but support new constructor arguments. In the future, the symbols tf.keras.optimizers.Optimizer/Adam/etc will point to the new optimizers, and the previous generation of optimizers will be moved to tf.keras.optimizers.legacy.Optimizer/Adam/etc.
    • Added L2 unit normalization layer tf.keras.layers.UnitNormalization.
    • Added tf.keras.regularizers.OrthogonalRegularizer, a new regularizer that encourages orthogonality between the rows (or columns) of a weight matrix.
    • Added tf.keras.layers.RandomBrightness layer for image preprocessing.
    • Added APIs for switching between interactive logging and absl logging. By default, Keras writes logs to stdout, which is not optimal in a non-interactive environment where stdout is inaccessible and logs can only be viewed through a logging backend. Use tf.keras.utils.disable_interactive_logging() to route logs to absl logging, tf.keras.utils.enable_interactive_logging() to switch back to stdout, and tf.keras.utils.is_interactive_logging_enabled() to check whether interactive logging is enabled.
    • Changed default value for the verbose argument of Model.evaluate() and Model.predict() to "auto", which defaults to verbose=1 for most cases and defaults to verbose=2 when used with ParameterServerStrategy or with interactive logging disabled.
    • Argument jit_compile in Model.compile() now applies to Model.evaluate() and Model.predict(). Setting jit_compile=True in compile() compiles the model's training, evaluation, and inference steps to XLA. Note that jit_compile=True may not necessarily work for all models.
    • Added DTensor-related Keras APIs under the tf.keras.dtensor namespace. The APIs are still classified as experimental; you are welcome to try them out. Please check the tutorial and guide on https://www.tensorflow.org/ for more details about DTensor. (A sketch combining several of the Keras additions above follows the tf.lite list below.)
  • tf.lite:

    • Added TFLite builtin op support for the following TF ops:
      • tf.math.argmin/tf.math.argmax for input data type tf.bool on CPU.
      • tf.nn.gelu op for output data type tf.float32 and quantization on CPU.
    • Added nominal support for unsigned 16-bit integer tensor types. Note that very few TFLite kernels support this type natively, so its use in mobile ML authoring is generally discouraged.
    • Added support for unsigned 16-bit integer tensor types in the cast op.
    • Added experimental support for lowering list_ops.tensor_list_set_item with DynamicUpdateSlice.
    • Enabled a new MLIR-based dynamic range quantization backend by default.
      • The new backend is used for post-training int8 dynamic range quantization and post-training float16 quantization.
      • Set experimental_new_dynamic_range_quantizer in tf.lite.TFLiteConverter to False to disable this change.
    • Native TF Lite variables are now enabled during conversion by default on all v2 TfLiteConverter entry points. experimental_enable_resource_variables on tf.lite.TFLiteConverter is now True by default and will be removed in the future. (A converter sketch follows below.)
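
As referenced above, here is a short sketch combining several of the new tf.keras APIs; the model shape, layer sizes, and input data are illustrative placeholders, not part of the release notes:

```
import tensorflow as tf

# tf.keras.utils.disable_interactive_logging()  # route logs to absl in non-interactive jobs

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.RandomBrightness(factor=0.2),  # new preprocessing layer
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(
        16,
        kernel_regularizer=tf.keras.regularizers.OrthogonalRegularizer()),
    tf.keras.layers.UnitNormalization(),           # new L2 normalization layer
    tf.keras.layers.Dense(10),
])

# jit_compile in compile() now also applies to evaluate() and predict();
# note that it may not work for all models.
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    jit_compile=True)

# verbose now defaults to "auto" in evaluate() and predict().
model.predict(tf.random.uniform((4, 32, 32, 3)))
```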

... (truncated)
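
And a matching converter sketch for the tf.lite defaults described above ("saved_model_dir" is a placeholder path, and the explicit flag assignment only restates the new default):

```
import tensorflow as tf

# Convert a SavedModel with the TF 2.9 defaults described above.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")

# Resource variables are now enabled by default on v2 entry points;
# set explicitly here only for illustration.
converter.experimental_enable_resource_variables = True

# Post-training dynamic range quantization now goes through the new
# MLIR-based backend by default.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
```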


Commits
  • 8a20d54 Merge pull request #56103 from tensorflow-jenkins/version-numbers-2.9.0-6827
  • d4ee584 Update version numbers to 2.9.0
  • 31add77 Merge pull request #56087 from tensorflow/fix-relnotes-on-r2.9
  • 892677b Update release notes with security updates
  • 186b7e5 Merge pull request #56067 from tensorflow/mm-cp-52488e5072f6fe44411d70c6af09e...
  • 8422d91 Merge pull request #56060 from yongtang:curl-7.83.1
  • 5f2a685 Merge pull request #56039 from tensorflow/penpornk-patch-1
  • 83b37ad Merge pull request #56032 from penpornk/cherrypicks_1HAY1
  • 4f56d50 Update RELEASE.md
  • a09f743 Update RELEASE.md
  • Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [tensorflow](https://github.com/tensorflow/tensorflow) from 2.6.2 to 2.9.0.
- [Release notes](https://github.com/tensorflow/tensorflow/releases)
- [Changelog](https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md)
- [Commits](tensorflow/tensorflow@v2.6.2...v2.9.0)

---
updated-dependencies:
- dependency-name: tensorflow
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
dependabot[bot] added the dependencies label (Pull requests that update a dependency file) on May 21, 2022

dependabot[bot] commented on behalf of GitHub on May 28, 2022

Superseded by #18.

dependabot[bot] closed this on May 28, 2022
dependabot[bot] deleted the dependabot/pip/python/requirements/ml/tensorflow-2.9.0 branch on May 28, 2022 07:17
nikitavemuri pushed a commit that referenced this pull request Jul 28, 2022
We encountered SIGSEGV when running Python test `python/ray/tests/test_failure_2.py::test_list_named_actors_timeout`. The stack is:

```
#0  0x00007fffed30f393 in std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::string const&) ()
   from /lib64/libstdc++.so.6
#1  0x00007fffee707649 in ray::RayLog::GetLoggerName() () from /home/admin/dev/Arc/merge/ray/python/ray/_raylet.so
#2  0x00007fffee70aa90 in ray::SpdLogMessage::Flush() () from /home/admin/dev/Arc/merge/ray/python/ray/_raylet.so
#3  0x00007fffee70af28 in ray::RayLog::~RayLog() () from /home/admin/dev/Arc/merge/ray/python/ray/_raylet.so
#4  0x00007fffee2b570d in ray::asio::testing::(anonymous namespace)::DelayManager::Init() [clone .constprop.0] ()
   from /home/admin/dev/Arc/merge/ray/python/ray/_raylet.so
#5  0x00007fffedd0d95a in _GLOBAL__sub_I_asio_chaos.cc () from /home/admin/dev/Arc/merge/ray/python/ray/_raylet.so
#6  0x00007ffff7fe282a in call_init.part () from /lib64/ld-linux-x86-64.so.2
#7  0x00007ffff7fe2931 in _dl_init () from /lib64/ld-linux-x86-64.so.2
#8  0x00007ffff7fe674c in dl_open_worker () from /lib64/ld-linux-x86-64.so.2
#9  0x00007ffff7b82e79 in _dl_catch_exception () from /lib64/libc.so.6
#10 0x00007ffff7fe5ffe in _dl_open () from /lib64/ld-linux-x86-64.so.2
#11 0x00007ffff7d5f39c in dlopen_doit () from /lib64/libdl.so.2
#12 0x00007ffff7b82e79 in _dl_catch_exception () from /lib64/libc.so.6
#13 0x00007ffff7b82f13 in _dl_catch_error () from /lib64/libc.so.6
#14 0x00007ffff7d5fb09 in _dlerror_run () from /lib64/libdl.so.2
#15 0x00007ffff7d5f42a in dlopen@@GLIBC_2.2.5 () from /lib64/libdl.so.2
#16 0x00007fffef04d330 in py_dl_open (self=<optimized out>, args=<optimized out>)
    at /tmp/python-build.20220507135524.257789/Python-3.7.11/Modules/_ctypes/callproc.c:1369
```

The root cause is that when loading `_raylet.so`, `static DelayManager _delay_manager` is initialized and `RAY_LOG(ERROR) << "RAY_testing_asio_delay_us is set to " << delay_env;` is executed. However, the static variables declared in `logging.cc` are not initialized yet (in this case, `std::string RayLog::logger_name_ = "ray_log_sink"`).

It's better not to rely on the initialization order of static variables in different compilation units because it's not guaranteed. I propose to change all `RAY_LOG`s to `std::cerr` in `DelayManager::Init()`.

The crash happens in Ant's internal codebase. Not sure why this test case passes in the community version though.

BTW, I've tried different approaches:

1. Using a static local variable in `get_delay_us` and removing the global variable. This doesn't work because `init()` needs to access the variable as well.
2. Defining the global variable as type `std::unique_ptr<DelayManager>` and initializing it in `get_delay_us`. This works, but it requires a lock to be thread-safe.
nikitavemuri pushed a commit that referenced this pull request Jun 3, 2024
…sume a tune.Tuner.fit() experiment from a checkpoint. (ray-project#45681)
nikitavemuri pushed a commit that referenced this pull request Jun 25, 2024