Apache MXNet 1.9.1 documentation
Table Of Contents
  • Python Tutorials
    • Getting Started
      • Crash Course
        • Manipulate data with ndarray
        • Create a neural network
        • Automatic differentiation with autograd
        • Train the neural network
        • Predict with a pre-trained model
        • Use GPUs
      • Moving to MXNet from Other Frameworks
        • PyTorch vs Apache MXNet
      • Gluon: from experiment to deployment
      • Logistic regression explained
      • MNIST
    • Packages
      • Automatic Differentiation
      • Gluon
        • Blocks
          • Custom Layers
          • Custom Layers (Beginners)
          • Hybridize
          • Initialization
          • Parameter and Block Naming
          • Layers and Blocks
          • Parameter Management
          • Saving and Loading Gluon Models
          • Activation Blocks
        • Data Tutorials
          • Image Augmentation
          • Spatial Augmentation
          • Color Augmentation
          • Composed Augmentations
          • Gluon Datasets and DataLoader
          • Using own data with included Datasets
          • Using own data with custom Datasets
          • Appendix: Upgrading from Module DataIter to Gluon DataLoader
        • Image Tutorials
          • Image Augmentation
          • Image similarity search with InfoGAN
          • Handwritten Digit Recognition
          • Using pre-trained models in MXNet
        • Losses
          • Custom Loss Blocks
          • Kullback-Leibler (KL) Divergence
          • Loss functions
        • Text Tutorials
          • Google Neural Machine Translation
          • Machine Translation with Transformer
        • Training
          • MXNet Gluon Fit API
          • Trainer
          • Learning Rates
            • Learning Rate Finder
            • Learning Rate Schedules
            • Advanced Learning Rate Schedules
          • Normalization Blocks
      • KVStore
        • Distributed Key-Value Store
      • NDArray
        • An Intro: Manipulate Data the MXNet Way with NDArray
        • NDArray Operations
        • NDArray Contexts
        • Gotchas using NumPy in Apache MXNet
        • Tutorials
          • CSRNDArray - NDArray in Compressed Sparse Row Storage Format
          • RowSparseNDArray - NDArray for Sparse Gradient Updates
          • Train a Linear Regression Model with Sparse Symbols
          • Sparse NDArrays with Gluon
      • ONNX
        • Fine-tuning an ONNX model
        • Running inference on MXNet/Gluon from an ONNX model
        • Importing an ONNX model into MXNet
        • Export ONNX Models
      • Optimizers
      • Visualization
        • Visualize networks
    • Performance
      • Compression
        • Deploy with int-8
        • Float16
        • Gradient Compression
        • GluonCV with Quantized Models
      • Accelerated Backend Tools
        • Intel MKL-DNN
          • Quantize with MKL-DNN backend
          • Improving accuracy with Intel® Neural Compressor
          • Install MXNet with MKL-DNN
        • TensorRT
          • Optimizing Deep Learning Computation Graphs with TensorRT
        • Use TVM
        • Profiling MXNet Models
        • Using AMP: Automatic Mixed Precision
    • Deployment
      • Export
        • Exporting to ONNX format
        • Export Gluon CV Models
        • Save / Load Parameters
      • Inference
        • Deploy into C++
        • Image Classification using pretrained ResNet-50 model on Jetson module
        • Deploy into a Java or Scala Environment
        • Real-time Object Detection with MXNet On The Raspberry Pi
      • Run on AWS
        • Run on an EC2 Instance
        • Run on Amazon SageMaker
        • MXNet on the Cloud
    • Extend
      • Custom Layers
      • Custom Numpy Operators
      • New Operator Creation
      • New Operator in MXNet Backend
  • Python API
    • mxnet.ndarray
      • ndarray
      • ndarray.contrib
      • ndarray.image
      • ndarray.linalg
      • ndarray.op
      • ndarray.random
      • ndarray.register
      • ndarray.sparse
      • ndarray.utils
    • mxnet.gluon
      • gluon.Block
      • gluon.HybridBlock
      • gluon.SymbolBlock
      • gluon.Constant
      • gluon.Parameter
      • gluon.ParameterDict
      • gluon.Trainer
      • gluon.contrib
      • gluon.data
        • data.vision
          • vision.datasets
          • vision.transforms
      • gluon.loss
      • gluon.model_zoo.vision
      • gluon.nn
      • gluon.rnn
      • gluon.utils
    • mxnet.autograd
    • mxnet.initializer
    • mxnet.optimizer
    • mxnet.lr_scheduler
    • mxnet.metric
    • mxnet.kvstore
    • mxnet.symbol
      • symbol
      • symbol.contrib
      • symbol.image
      • symbol.linalg
      • symbol.op
      • symbol.random
      • symbol.register
      • symbol.sparse
    • mxnet.module
    • mxnet.contrib
      • contrib.autograd
      • contrib.io
      • contrib.ndarray
      • contrib.onnx
      • contrib.quantization
      • contrib.symbol
      • contrib.tensorboard
      • contrib.tensorrt
      • contrib.text
    • mxnet
      • mxnet.attribute
      • mxnet.base
      • mxnet.callback
      • mxnet.context
      • mxnet.engine
      • mxnet.executor
      • mxnet.executor_manager
      • mxnet.image
      • mxnet.io
      • mxnet.kvstore_server
      • mxnet.libinfo
      • mxnet.log
      • mxnet.model
      • mxnet.monitor
      • mxnet.name
      • mxnet.notebook
      • mxnet.operator
      • mxnet.profiler
      • mxnet.random
      • mxnet.recordio
      • mxnet.registry
      • mxnet.rtc
      • mxnet.runtime
      • mxnet.test_utils
      • mxnet.torch
      • mxnet.util
      • mxnet.visualization
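The Training tutorials listed above (Trainer, Learning Rate Schedules) revolve around stepwise learning-rate decay. As a rough pure-Python sketch of the rule that `mxnet.lr_scheduler.FactorScheduler` applies (multiply the base rate by `factor` every `step` updates, floored at `stop_factor_lr`); the function name here is illustrative, not MXNet API:

```python
def factor_schedule(num_update, base_lr=0.1, step=1000, factor=0.5,
                    stop_factor_lr=1e-8):
    """Step-decay sketch: scale base_lr by factor^(num_update // step),
    never letting the result drop below stop_factor_lr."""
    lr = base_lr * factor ** (num_update // step)
    return max(lr, stop_factor_lr)

# The learning rate halves every 1000 updates:
print(factor_schedule(0))     # 0.1
print(factor_schedule(999))   # 0.1
print(factor_schedule(1000))  # 0.05
print(factor_schedule(2500))  # 0.025
```

In MXNet this schedule would be passed to an optimizer via `lr_scheduler`; the sketch only reproduces the arithmetic, not the API surface.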

Python Module Index

m
- mxnet
    mxnet.attribute
    mxnet.context
    mxnet.contrib.autograd
    mxnet.contrib.symbol
    mxnet.engine
    mxnet.executor_manager
    mxnet.libinfo
    mxnet.name
    mxnet.ndarray
    mxnet.ndarray.contrib
    mxnet.ndarray.image
    mxnet.ndarray.linalg
    mxnet.ndarray.op
    mxnet.ndarray.random
    mxnet.ndarray.register
    mxnet.ndarray.sparse
    mxnet.ndarray.utils
    mxnet.random
    mxnet.runtime
    mxnet.symbol
    mxnet.symbol.contrib
    mxnet.symbol.image
    mxnet.symbol.linalg
    mxnet.symbol.op
    mxnet.symbol.random
    mxnet.symbol.register
    mxnet.symbol.sparse
    mxnet.util
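Among the modules indexed above, `mxnet.ndarray.sparse` provides `CSRNDArray`, which uses the standard compressed sparse row layout (`data`, `indices`, `indptr` arrays). A minimal pure-Python sketch of how that layout encodes a matrix; the helper function is illustrative, not part of the MXNet API:

```python
def csr_to_dense(data, indices, indptr, shape):
    """Expand CSR arrays into a dense row-major list of lists.

    Row i keeps its nonzero values in data[indptr[i]:indptr[i+1]],
    with the matching column positions in indices[indptr[i]:indptr[i+1]].
    """
    rows, cols = shape
    dense = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for k in range(indptr[i], indptr[i + 1]):
            dense[i][indices[k]] = data[k]
    return dense

# A 3x4 matrix with 4 nonzeros:
# [[1, 0, 2, 0],
#  [0, 0, 3, 0],
#  [0, 4, 0, 0]]
dense = csr_to_dense(data=[1, 2, 3, 4],
                     indices=[0, 2, 2, 1],
                     indptr=[0, 2, 3, 4],
                     shape=(3, 4))
print(dense)
```

The same three-array layout is used by SciPy's `csr_matrix`, which is why CSR data moves easily between the two libraries.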


Resources

  • Dev list
  • User mailing list
  • Developer Wiki
  • Jira Tracker
  • Github Roadmap
  • Blog
  • Forum
  • Contribute
  • apache/mxnet
  • apachemxnet

A flexible and efficient library for deep learning.

Apache MXNet is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.

Copyright © 2017-2018, The Apache Software Foundation. Apache MXNet, MXNet, Apache, the Apache feather, and the Apache MXNet project logo are either registered trademarks or trademarks of the Apache Software Foundation.