DNN (Deep Neural Network)
Deep neural network (DNN) with weight sparsity control (i.e., L1-norm regularization) improved classification performance using whole-brain resting-state functional connectivity patterns of schizophrenia patients and healthy controls. Initializing the DNN’s weights through a stacked auto-encoder enhanced the classification performance as well (Kim et al., NeuroImage, 2016). Here, we provide MATLAB- and Python-based implementations of the DNN with weight sparsity control (MATLAB code / Python code download).
Reference: Kim et al., Deep neural network with weight sparsity control and pre-training extracts hierarchical features and enhances classification performance: Evidence from whole-brain resting-state functional connectivity patterns of schizophrenia. NeuroImage. 2016 Jan;124(Pt A):127-46. [ PubMed / Google Scholar ]
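To make the weight sparsity control concrete, the sketch below shows the idea in a few lines: after each gradient step, a layer’s L1 penalty coefficient is tightened or relaxed depending on whether the fraction of non-zero weights (the non-zero ratio, nzr) is above or below its target. This is a minimal NumPy sketch under our own assumptions, not the released code; the names (sparsity_controlled_step, target_nzr, beta, epsilon) are ours, and we apply the L1 penalty as a soft-thresholding step, which the released implementations may do differently.

import numpy as np

def nonzero_ratio(W, epsilon=1e-4):
    # Fraction of weights whose magnitude exceeds a small threshold.
    return float(np.mean(np.abs(W) > epsilon))

def sparsity_controlled_step(W, grad, lam, target_nzr, lr=0.01, beta=1e-3):
    # One gradient step on a layer's weights W, with an L1 penalty whose
    # coefficient lam is adapted to drive the non-zero ratio toward target_nzr.
    W = W - lr * grad                                       # ordinary gradient step
    W = np.sign(W) * np.maximum(np.abs(W) - lr * lam, 0.0)  # soft-thresholding (L1 step)
    lam = max(0.0, lam + beta * (nonzero_ratio(W) - target_nzr))  # tighten or relax the penalty
    return W, lam

With a target of 0.2, for example, the controller raises lam while more than 20% of the layer’s weights are non-zero and lowers it otherwise, so the sparsity level settles near the target as training proceeds.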
MATLAB code (download)
- Download the MATLAB code from the link above
- Open the “test_WeightSparsity.m” file and prepare/load your own data
- Divide your data into training and test sets (i.e., ‘train_x’, ‘train_y’, ‘test_x’, ‘test_y’)
- Determine how many hidden layers/nodes to use
If you have two classes and want to use two hidden layers with 100 nodes each, set up the network as follows (here, 784 is the input dimensionality, e.g., a 28x28 MNIST image):
>> nn = nnsetup([784 100 100 2]);
- Set the target weight sparsity level for each hidden layer (e.g., nn.nzr = [0.2 0.2]) and other parameters (e.g., learning rate, batch size, etc.)
- Run ‘test_WeightSparsity.m’.
- Make sure the weight sparsity level of each hidden layer converges to its target level (one way to check this is sketched below)
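One way to perform this check is to compare each hidden layer’s final non-zero ratio against its target. The lines below are a hypothetical check, sketched in Python; the function name, threshold (epsilon), and tolerance (tol) are our own choices, not part of the released MATLAB code.

import numpy as np

def check_sparsity_convergence(hidden_weights, targets, epsilon=1e-4, tol=0.02):
    # Compare each hidden layer's non-zero ratio with its target level.
    for i, (W, target) in enumerate(zip(hidden_weights, targets), start=1):
        nzr = float(np.mean(np.abs(W) > epsilon))
        status = "converged" if abs(nzr - target) <= tol else "not converged"
        print(f"hidden layer {i}: nzr = {nzr:.3f} (target {target}) -> {status}")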
The MATLAB code was modified from the DeepLearnToolbox to apply a non-zero ratio (nzr) for weight sparsity control.
Python code (download)
- Download the MNIST data from the link
- Set the parameters (e.g., learning rate, regularization term, etc.) in the “mlp_h3.py” code (a hypothetical parameter block is sketched below)
- Run the Python code “mlp_h3.py”
The Python code was modified from the DeepLearningTutorials to apply a non-zero ratio for weight sparsity control.
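The parameters in question are the usual MLP hyperparameters plus the target non-zero ratio for each hidden layer. The block below is only a hypothetical illustration of such settings; the variable names and values are ours, not necessarily those in “mlp_h3.py”.

# Hypothetical hyperparameter block (illustrative names and values,
# not the actual variables in mlp_h3.py)
learning_rate = 0.01          # gradient-descent step size
l1_reg = 0.0                  # initial L1 coefficient (adapted during training)
target_nzr = [0.2, 0.2, 0.2]  # target non-zero ratio per hidden layer
n_hidden = [100, 100, 100]    # hidden-layer sizes
n_epochs = 100                # number of training epochs
batch_size = 100              # mini-batch size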
Please also check out our GitHub!