10 Commits

SHA1        Author  Message                                                                    Date
935993a040  Xintao  fix colab link error                                                       2021-07-31 12:34:09 +08:00
74fcfea286  Xintao  fix bug: extension                                                         2021-07-28 15:21:47 +08:00
da1e1ee805  Xintao  update Readme.txt                                                          2021-07-28 03:01:47 +08:00
1d8745eb61  Xintao  update Readme                                                              2021-07-28 02:59:20 +08:00
c94d2de155  Xintao  support more inference features: tile, alpha channel, gray image, 16-bit   2021-07-28 02:56:22 +08:00
f4297a70af  Xintao  Merge branch 'master' of github.com:xinntao/Real-ESRGAN                    2021-07-27 11:07:05 +08:00
492a829c14  Xintao  fix bug: gt_sum; fix typo                                                  2021-07-27 11:06:43 +08:00
0573f32dd0  Xintao  Create LICENSE                                                             2021-07-26 00:33:08 +08:00
8454fd2c7a  Xintao  add pytorch2onnx                                                           2021-07-25 16:16:57 +08:00
ad2ff81725  Xintao  add RealESRNet model, fix bug in exe file                                  2021-07-25 16:16:25 +08:00
9 changed files with 291 additions and 48 deletions

LICENSE Normal file

@@ -0,0 +1,29 @@
BSD 3-Clause License
Copyright (c) 2021, Xintao Wang
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

README.md

@@ -5,13 +5,16 @@
 [![LICENSE](https://img.shields.io/github/license/xinntao/Real-ESRGAN.svg)](https://github.com/xinntao/Real-ESRGAN/blob/master/LICENSE)
 [![python lint](https://github.com/xinntao/Real-ESRGAN/actions/workflows/pylint.yml/badge.svg)](https://github.com/xinntao/Real-ESRGAN/blob/master/.github/workflows/pylint.yml)
-1. [Colab Demo](https://colab.research.google.com/drive/1sVsoBd9AjckIXThgtZhGrHRfFI6UUYOo) for Real-ESRGAN <a href="https://colab.research.google.com/drive/1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>.
+1. [Colab Demo](https://colab.research.google.com/drive/1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing) for Real-ESRGAN <a href="https://colab.research.google.com/drive/1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>.
 2. [Portable Windows executable file](https://github.com/xinntao/Real-ESRGAN/releases). You can find more information [here](#Portable-executable-files).
 Real-ESRGAN aims at developing **Practical Algorithms for General Image Restoration**.<br>
 We extend the powerful ESRGAN to a practical restoration application (namely, Real-ESRGAN), which is trained with pure synthetic data.
-:triangular_flag_on_post: The training codes have been released. A detailed guide can be found in [Training.md](Training.md).
+:triangular_flag_on_post: **Updates**
+- :white_check_mark: [The inference code](inference_realesrgan.py) supports: 1) **tile** options; 2) images with **alpha channel**; 3) **gray** images; 4) **16-bit** images.
+- :white_check_mark: The training codes have been released. A detailed guide can be found in [Training.md](Training.md).
 ### :book: Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data
@@ -49,7 +52,7 @@ If you have some images that Real-ESRGAN could not well restored, please also op
 ### Portable executable files
-You can download **Windows executable files** from https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN-ncnn-vulkan.zip
+You can download **Windows executable files** from https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRGAN-ncnn-vulkan-20210725-windows.zip
 This executable file is **portable** and includes all the binaries and models required. No CUDA or PyTorch environment is needed.<br>
@@ -59,6 +62,14 @@ You can simply run the following command:
 ./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png
 ```
+We have provided three models:
+1. realesrgan-x4plus (default)
+2. realesrnet-x4plus
+3. esrgan-x4
+You can use the `-n` argument for other models, for example, `./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n realesrnet-x4plus`
 Note that it may introduce block inconsistency (and also generate slightly different results from the PyTorch implementation), because this executable file first crops the input image into several tiles, and then processes them separately, finally stitches together.
 This executable file is based on the wonderful [Tencent/ncnn](https://github.com/Tencent/ncnn) and [realsr-ncnn-vulkan](https://github.com/nihui/realsr-ncnn-vulkan) by [nihui](https://github.com/nihui).
@@ -106,6 +117,12 @@ python inference_realesrgan.py --model_path experiments/pretrained_models/RealES
 Results are in the `results` folder
+## :european_castle: Model Zoo
+- [RealESRGAN-x4plus](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth)
+- [RealESRNet-x4plus](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRNet_x4plus.pth)
+- [official ESRGAN-x4](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth)
 ## :computer: Training
 A detailed guide can be found in [Training.md](Training.md).
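The block inconsistency noted above comes from tile-based processing: the input is cropped into overlapping tiles, each tile is upscaled independently, and only the unpadded core of each upscaled tile is kept when stitching. A minimal pure-Python sketch of that coordinate arithmetic along one axis (the helper name and return shape are illustrative, not the repository's API):

```python
import math

def tile_coords(length, tile_size, tile_pad, scale):
    """For one axis, compute each tile's padded input slice, its destination in
    the output, and the slice of the upscaled tile kept when stitching."""
    tiles = []
    for t in range(math.ceil(length / tile_size)):
        start = t * tile_size
        end = min(start + tile_size, length)
        # read the tile with tile_pad extra context on each side (clamped)
        start_pad = max(start - tile_pad, 0)
        end_pad = min(end + tile_pad, length)
        # where the kept core sits inside the upscaled padded tile
        keep_start = (start - start_pad) * scale
        keep_end = keep_start + (end - start) * scale
        tiles.append({
            'input': (start_pad, end_pad),           # padded crop of the input
            'output': (start * scale, end * scale),  # destination in the output
            'keep': (keep_start, keep_end),          # crop of the upscaled tile
        })
    return tiles

coords = tile_coords(length=100, tile_size=40, tile_pad=10, scale=4)
# the kept regions exactly cover the 400-pixel output with no gaps or overlap
assert [c['output'] for c in coords] == [(0, 160), (160, 320), (320, 400)]
```

Each tile sees `tile_pad` pixels of neighboring context, which softens but does not eliminate seams, since tiles are still denoised and upscaled independently.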

Training.md

@@ -34,7 +34,10 @@ DF2K_HR_sub/000001_s003.png
 ## Train Real-ESRNet
-1. Download pre-trained model [ESRGAN](https://drive.google.com/file/d/1b3_bWZTjNO3iL2js1yWkJfjZykcQgvzT/view?usp=sharing) into `experiments/pretrained_models`.
+1. Download pre-trained model [ESRGAN](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth) into `experiments/pretrained_models`.
+    ```bash
+    wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth -P experiments/pretrained_models
+    ```
 1. Modify the content in the option file `options/train_realesrnet_x4plus.yml` accordingly:
 ```yml
 train:

inference_realesrgan.py

@@ -1,6 +1,7 @@
 import argparse
 import cv2
 import glob
+import math
 import numpy as np
 import os
 import torch
@@ -10,59 +11,233 @@ from torch.nn import functional as F
 def main():
     parser = argparse.ArgumentParser()
-    parser.add_argument('--model_path', type=str, default='experiments/pretrained_models/RealESRGAN_x4plus.pth')
-    parser.add_argument('--scale', type=int, default=4)
-    parser.add_argument('--input', type=str, default='inputs', help='input image or folder')
+    parser.add_argument('--input', type=str, default='inputs', help='Input image or folder')
+    parser.add_argument(
+        '--model_path',
+        type=str,
+        default='experiments/pretrained_models/RealESRGAN_x4plus.pth',
+        help='Path to the pre-trained model')
+    parser.add_argument('--scale', type=int, default=4, help='Upsample scale factor')
+    parser.add_argument('--suffix', type=str, default='out', help='Suffix of the restored image')
+    parser.add_argument('--tile', type=int, default=0, help='Tile size, 0 for no tile during testing')
+    parser.add_argument('--tile_pad', type=int, default=10, help='Tile padding')
+    parser.add_argument('--pre_pad', type=int, default=0, help='Pre padding size at each border')
+    parser.add_argument(
+        '--alpha_upsampler',
+        type=str,
+        default='realesrgan',
+        help='The upsampler for the alpha channels. Options: realesrgan | bicubic')
+    parser.add_argument(
+        '--ext',
+        type=str,
+        default='auto',
+        help='Image extension. Options: auto | jpg | png, auto means using the same extension as inputs')
     args = parser.parse_args()

-    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-    # set up model
-    model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=args.scale)
-    loadnet = torch.load(args.model_path)
-    model.load_state_dict(loadnet['params_ema'], strict=True)
-    model.eval()
-    model = model.to(device)
+    upsampler = RealESRGANer(
+        scale=args.scale, model_path=args.model_path, tile=args.tile, tile_pad=args.tile_pad, pre_pad=args.pre_pad)

     os.makedirs('results/', exist_ok=True)
-    for idx, path in enumerate(sorted(glob.glob(os.path.join(args.input, '*')))):
-        imgname = os.path.splitext(os.path.basename(path))[0]
+    if os.path.isfile(args.input):
+        paths = [args.input]
+    else:
+        paths = sorted(glob.glob(os.path.join(args.input, '*')))
+
+    for idx, path in enumerate(paths):
+        imgname, extension = os.path.splitext(os.path.basename(path))
         print('Testing', idx, imgname)
-        # read image
-        img = cv2.imread(path, cv2.IMREAD_COLOR).astype(np.float32) / 255.
-        img = torch.from_numpy(np.transpose(img[:, :, [2, 1, 0]], (2, 0, 1))).float()
-        img = img.unsqueeze(0).to(device)
-        if args.scale == 2:
-            mod_scale = 2
-        elif args.scale == 1:
-            mod_scale = 4
-        else:
-            mod_scale = None
-        if mod_scale is not None:
-            h_pad, w_pad = 0, 0
-            _, _, h, w = img.size()
-            if (h % mod_scale != 0):
-                h_pad = (mod_scale - h % mod_scale)
-            if (w % mod_scale != 0):
-                w_pad = (mod_scale - w % mod_scale)
-            img = F.pad(img, (0, w_pad, 0, h_pad), 'reflect')
-        try:
-            # inference
-            with torch.no_grad():
-                output = model(img)
-            # remove extra pad
-            if mod_scale is not None:
-                _, _, h, w = output.size()
-                output = output[:, :, 0:h - h_pad, 0:w - w_pad]
-            # save image
-            output = output.data.squeeze().float().cpu().clamp_(0, 1).numpy()
-            output = np.transpose(output[[2, 1, 0], :, :], (1, 2, 0))
-            output = (output * 255.0).round().astype(np.uint8)
-            cv2.imwrite(f'results/{imgname}_RealESRGAN.png', output)
-        except Exception as error:
-            print('Error', error)
+        # ------------------------------ read image ------------------------------ #
+        img = cv2.imread(path, cv2.IMREAD_UNCHANGED).astype(np.float32)
+        if np.max(img) > 255:  # 16-bit image
+            max_range = 65535
+            print('\tInput is a 16-bit image')
+        else:
+            max_range = 255
+        img = img / max_range
+        if len(img.shape) == 2:  # gray image
+            img_mode = 'L'
+            img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
+        elif img.shape[2] == 4:  # RGBA image with alpha channel
+            img_mode = 'RGBA'
+            alpha = img[:, :, 3]
+            img = img[:, :, 0:3]
+            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+            if args.alpha_upsampler == 'realesrgan':
+                alpha = cv2.cvtColor(alpha, cv2.COLOR_GRAY2RGB)
+        else:
+            img_mode = 'RGB'
+            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+
+        # ------------------- process image (without the alpha channel) ------------------- #
+        upsampler.pre_process(img)
+        if args.tile:
+            upsampler.tile_process()
+        else:
+            upsampler.process()
+        output_img = upsampler.post_process()
+        output_img = output_img.data.squeeze().float().cpu().clamp_(0, 1).numpy()
+        output_img = np.transpose(output_img[[2, 1, 0], :, :], (1, 2, 0))
+        if img_mode == 'L':
+            output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2GRAY)
+
+        # ------------------- process the alpha channel if necessary ------------------- #
+        if img_mode == 'RGBA':
+            if args.alpha_upsampler == 'realesrgan':
+                upsampler.pre_process(alpha)
+                if args.tile:
+                    upsampler.tile_process()
+                else:
+                    upsampler.process()
+                output_alpha = upsampler.post_process()
+                output_alpha = output_alpha.data.squeeze().float().cpu().clamp_(0, 1).numpy()
+                output_alpha = np.transpose(output_alpha[[2, 1, 0], :, :], (1, 2, 0))
+                output_alpha = cv2.cvtColor(output_alpha, cv2.COLOR_BGR2GRAY)
+            else:
+                h, w = alpha.shape[0:2]
+                output_alpha = cv2.resize(alpha, (w * args.scale, h * args.scale), interpolation=cv2.INTER_LINEAR)
+            # merge the alpha channel
+            output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2BGRA)
+            output_img[:, :, 3] = output_alpha
+
+        # ------------------------------ save image ------------------------------ #
+        if args.ext == 'auto':
+            extension = extension[1:]
+        else:
+            extension = args.ext
+        if img_mode == 'RGBA':  # RGBA images should be saved in png format
+            extension = 'png'
+        save_path = f'results/{imgname}_{args.suffix}.{extension}'
+        if max_range == 65535:  # 16-bit image
+            output = (output_img * 65535.0).round().astype(np.uint16)
+        else:
+            output = (output_img * 255.0).round().astype(np.uint8)
+        cv2.imwrite(save_path, output)
+
+
+class RealESRGANer():
+
+    def __init__(self, scale, model_path, tile=0, tile_pad=10, pre_pad=10):
+        self.scale = scale
+        self.tile_size = tile
+        self.tile_pad = tile_pad
+        self.pre_pad = pre_pad
+        self.mod_scale = None
+
+        # initialize model
+        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
+        model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=scale)
+        loadnet = torch.load(model_path)
+        if 'params_ema' in loadnet:
+            keyname = 'params_ema'
+        else:
+            keyname = 'params'
+        model.load_state_dict(loadnet[keyname], strict=True)
+        model.eval()
+        self.model = model.to(self.device)
+
+    def pre_process(self, img):
+        img = torch.from_numpy(np.transpose(img, (2, 0, 1))).float()
+        self.img = img.unsqueeze(0).to(self.device)
+        # pre_pad
+        if self.pre_pad != 0:
+            self.img = F.pad(self.img, (0, self.pre_pad, 0, self.pre_pad), 'reflect')
+        # mod pad
+        if self.scale == 2:
+            self.mod_scale = 2
+        elif self.scale == 1:
+            self.mod_scale = 4
+        if self.mod_scale is not None:
+            self.mod_pad_h, self.mod_pad_w = 0, 0
+            _, _, h, w = self.img.size()
+            if (h % self.mod_scale != 0):
+                self.mod_pad_h = (self.mod_scale - h % self.mod_scale)
+            if (w % self.mod_scale != 0):
+                self.mod_pad_w = (self.mod_scale - w % self.mod_scale)
+            self.img = F.pad(self.img, (0, self.mod_pad_w, 0, self.mod_pad_h), 'reflect')
+
+    def process(self):
+        try:
+            # inference
+            with torch.no_grad():
+                self.output = self.model(self.img)
+        except Exception as error:
+            print('Error', error)
+
+    def tile_process(self):
+        """Modified from: https://github.com/ata4/esrgan-launcher
+        """
+        batch, channel, height, width = self.img.shape
+        output_height = height * self.scale
+        output_width = width * self.scale
+        output_shape = (batch, channel, output_height, output_width)
+
+        # start with black image
+        self.output = self.img.new_zeros(output_shape)
+        tiles_x = math.ceil(width / self.tile_size)
+        tiles_y = math.ceil(height / self.tile_size)
+
+        # loop over all tiles
+        for y in range(tiles_y):
+            for x in range(tiles_x):
+                # extract tile from input image
+                ofs_x = x * self.tile_size
+                ofs_y = y * self.tile_size
+                # input tile area on total image
+                input_start_x = ofs_x
+                input_end_x = min(ofs_x + self.tile_size, width)
+                input_start_y = ofs_y
+                input_end_y = min(ofs_y + self.tile_size, height)
+
+                # input tile area on total image with padding
+                input_start_x_pad = max(input_start_x - self.tile_pad, 0)
+                input_end_x_pad = min(input_end_x + self.tile_pad, width)
+                input_start_y_pad = max(input_start_y - self.tile_pad, 0)
+                input_end_y_pad = min(input_end_y + self.tile_pad, height)
+
+                # input tile dimensions
+                input_tile_width = input_end_x - input_start_x
+                input_tile_height = input_end_y - input_start_y
+                tile_idx = y * tiles_x + x + 1
+                input_tile = self.img[:, :, input_start_y_pad:input_end_y_pad, input_start_x_pad:input_end_x_pad]
+
+                # upscale tile
+                try:
+                    with torch.no_grad():
+                        output_tile = self.model(input_tile)
+                except Exception as error:
+                    print('Error', error)
+                print(f'\tTile {tile_idx}/{tiles_x * tiles_y}')
+
+                # output tile area on total image
+                output_start_x = input_start_x * self.scale
+                output_end_x = input_end_x * self.scale
+                output_start_y = input_start_y * self.scale
+                output_end_y = input_end_y * self.scale
+
+                # output tile area without padding
+                output_start_x_tile = (input_start_x - input_start_x_pad) * self.scale
+                output_end_x_tile = output_start_x_tile + input_tile_width * self.scale
+                output_start_y_tile = (input_start_y - input_start_y_pad) * self.scale
+                output_end_y_tile = output_start_y_tile + input_tile_height * self.scale
+
+                # put tile into output image
+                self.output[:, :, output_start_y:output_end_y,
+                            output_start_x:output_end_x] = output_tile[:, :, output_start_y_tile:output_end_y_tile,
+                                                                       output_start_x_tile:output_end_x_tile]

+    def post_process(self):
+        # remove extra pad
+        if self.mod_scale is not None:
+            _, _, h, w = self.output.size()
+            self.output = self.output[:, :, 0:h - self.mod_pad_h * self.scale, 0:w - self.mod_pad_w * self.scale]
+        # remove prepad
+        if self.pre_pad != 0:
+            _, _, h, w = self.output.size()
+            self.output = self.output[:, :, 0:h - self.pre_pad * self.scale, 0:w - self.pre_pad * self.scale]
+        return self.output
+

 if __name__ == '__main__':
     main()
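The padding bookkeeping in `pre_process` and `post_process` can be checked with plain integer arithmetic. The sketch below (hypothetical helper names, not part of the repository) mirrors the two methods and verifies that after `pre_pad`, the mod-scale pad, and the post-process crops, the output is exactly `input_size * scale`:

```python
def padded_size(h, w, pre_pad, scale):
    """Mirror pre_process: add pre_pad on the bottom/right, then pad up to a
    multiple of mod_scale (2 for x2 models, 4 for x1 models, none otherwise)."""
    h, w = h + pre_pad, w + pre_pad
    mod_scale = {1: 4, 2: 2}.get(scale)
    pad_h = pad_w = 0
    if mod_scale is not None:
        pad_h = (mod_scale - h % mod_scale) % mod_scale
        pad_w = (mod_scale - w % mod_scale) % mod_scale
    return h + pad_h, w + pad_w, pad_h, pad_w

def restored_size(h, w, pre_pad, scale):
    """Mirror post_process: upscale the padded size, then crop the mod pad and
    the pre_pad, each multiplied by the scale factor."""
    ph, pw, pad_h, pad_w = padded_size(h, w, pre_pad, scale)
    oh, ow = ph * scale, pw * scale
    oh, ow = oh - pad_h * scale, ow - pad_w * scale      # remove extra (mod) pad
    oh, ow = oh - pre_pad * scale, ow - pre_pad * scale  # remove pre_pad
    return oh, ow

# the round trip always yields exactly input_size * scale
for h, w, pre_pad, scale in [(123, 77, 10, 2), (50, 51, 0, 1), (33, 40, 10, 4)]:
    assert restored_size(h, w, pre_pad, scale) == (h * scale, w * scale)
```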

inputs/tree_alpha_16bit.png (new binary file, 373 KiB, not shown)
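This test image exercises the 16-bit path in the new inference code, which guesses bit depth from the pixel range: any value above 255 implies a 16-bit source, so the image is normalized by 65535 rather than 255. A small numpy sketch of that heuristic (illustrative helper, not the repository's API):

```python
import numpy as np

def normalize(img):
    """Heuristic used by the inference script: pixel values above 255 imply a
    16-bit source, so divide by 65535; otherwise divide by 255."""
    img = img.astype(np.float32)
    max_range = 65535 if img.max() > 255 else 255
    return img / max_range, max_range

scaled8, r8 = normalize(np.array([[0, 128, 255]], dtype=np.uint8))
scaled16, r16 = normalize(np.array([[0, 32768, 65535]], dtype=np.uint16))
assert r8 == 255 and r16 == 65535
assert scaled8.max() == 1.0 and scaled16.max() == 1.0
```

Note this is a heuristic: a very dark 16-bit image whose values all fall below 256 would be treated as 8-bit.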

inputs/wolf_gray.jpg (new binary file, 52 KiB, not shown)

realesrgan_model.py

@@ -18,7 +18,7 @@ class RealESRGANModel(SRGANModel):
     def __init__(self, opt):
         super(RealESRGANModel, self).__init__(opt)
         self.jpeger = DiffJPEG(differentiable=False).cuda()
-        self.usm_shaper = USMSharp().cuda()
+        self.usm_sharpener = USMSharp().cuda()
         self.queue_size = opt['queue_size']

     @torch.no_grad()
@@ -58,7 +58,7 @@ class RealESRGANModel(SRGANModel):
         if self.is_train:
             # training data synthesis
             self.gt = data['gt'].to(self.device)
-            self.gt_usm = self.usm_shaper(self.gt)
+            self.gt_usm = self.usm_sharpener(self.gt)
             self.kernel1 = data['kernel1'].to(self.device)
             self.kernel2 = data['kernel2'].to(self.device)
@@ -160,6 +160,8 @@ class RealESRGANModel(SRGANModel):
             # training pair pool
             self._dequeue_and_enqueue()
+            # sharpen self.gt again, as we have changed the self.gt with self._dequeue_and_enqueue
+            self.gt_usm = self.usm_sharpener(self.gt)
         else:
             self.lq = data['lq'].to(self.device)
             if 'gt' in data:
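The added re-sharpening line matters because `_dequeue_and_enqueue` swaps the current batch with samples drawn from a training-pair pool, so a `gt_usm` computed before the swap describes the wrong images. A toy sketch of the failure mode (integers and lists stand in for tensors; `PairPool` and `sharpen` are hypothetical simplifications):

```python
import random

class PairPool:
    """Toy stand-in for _dequeue_and_enqueue: keep a queue of past (lq, gt)
    pairs and swap the incoming batch with a randomly chosen queued one."""
    def __init__(self, size):
        self.size, self.queue = size, []

    def dequeue_and_enqueue(self, lq, gt):
        if len(self.queue) < self.size:    # queue not yet full: just enqueue
            self.queue.append((lq, gt))
            return lq, gt
        idx = random.randrange(self.size)  # queue full: swap with a random entry
        old_lq, old_gt = self.queue[idx]
        self.queue[idx] = (lq, gt)
        return old_lq, old_gt

sharpen = lambda x: x + 1                  # stand-in for usm_sharpener

pool = PairPool(size=1)
pool.dequeue_and_enqueue('lq0', 10)        # fill the queue
gt = 20
gt_usm = sharpen(gt)                       # sharpened BEFORE the swap
lq, gt = pool.dequeue_and_enqueue('lq1', gt)  # gt is now the queued 10
assert sharpen(gt) == 11 and gt_usm == 21  # stale: gt_usm no longer matches gt
gt_usm = sharpen(gt)                       # the fix: re-sharpen after the swap
assert gt_usm == 11
```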

realesrnet_model.py

@@ -17,7 +17,7 @@ class RealESRNetModel(SRModel):
     def __init__(self, opt):
         super(RealESRNetModel, self).__init__(opt)
         self.jpeger = DiffJPEG(differentiable=False).cuda()
-        self.usm_shaper = USMSharp().cuda()
+        self.usm_sharpener = USMSharp().cuda()
         self.queue_size = opt['queue_size']

     @torch.no_grad()
@@ -59,7 +59,7 @@ class RealESRNetModel(SRModel):
         self.gt = data['gt'].to(self.device)
         # USM the GT images
         if self.opt['gt_usm'] is True:
-            self.gt = self.usm_shaper(self.gt)
+            self.gt = self.usm_sharpener(self.gt)
         self.kernel1 = data['kernel1'].to(self.device)
         self.kernel2 = data['kernel2'].to(self.device)
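The renamed `USMSharp` module performs unsharp masking: the residual between the image and a blurred copy is scaled and added back, which exaggerates edges in the ground truth. A numpy-only sketch of the idea (a box blur stands in for the Gaussian, and the soft threshold mask used by the real implementation is omitted):

```python
import numpy as np

def usm_sharpen(img, weight=0.5, radius=1):
    """Unsharp masking: amplify the residual between the image and a blurred
    copy. Simplified sketch: box blur instead of Gaussian, no threshold mask."""
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode='edge')
    blur = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):          # box blur via shifted sums
        for dx in range(k):
            blur += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blur /= k * k
    return np.clip(img + weight * (img - blur), 0.0, 1.0)

img = np.full((5, 5), 0.2)
img[:, 2:] = 0.8                 # a vertical edge
sharp = usm_sharpen(img)
assert sharp[2, 1] < img[2, 1]             # undershoot on the dark side
assert sharp[2, 2] > img[2, 2]             # overshoot on the bright side
assert np.isclose(sharp[2, 0], img[2, 0])  # flat regions unchanged
```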

scripts/pytorch2onnx.py Normal file

@@ -0,0 +1,17 @@
import torch
import torch.onnx
from basicsr.archs.rrdbnet_arch import RRDBNet
# An instance of your model
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32)
model.load_state_dict(torch.load('experiments/pretrained_models/RealESRGAN_x4plus.pth')['params_ema'])
# set the train mode to false since we will only run the forward pass.
model.train(False)
model.cpu().eval()
# An example input you would normally provide to your model's forward() method
x = torch.rand(1, 3, 64, 64)
# Export the model
with torch.no_grad():
torch_out = torch.onnx._export(model, x, 'realesrgan-x4.onnx', opset_version=11, export_params=True)