82 Commits

Author SHA1 Message Date
Xintao
685d429c81 v0.2.5.0 2022-04-24 19:59:55 +08:00
Xintao
13c95fe094 update readme 2022-04-24 19:58:00 +08:00
wyz
82cf0e8e4a Add comparisons for the soon-to-be-released animevideo-v3 model (#301)
* add comparisons for animevideo-v3 model

* fix markdown table format

Co-authored-by: yanzewu <yanzewu@tencent.com>
2022-04-24 17:30:21 +08:00
Xintao
cddc2ff658 update readme 2022-04-24 17:27:15 +08:00
Xintao
98add035f2 support realesr-animevideov3 2022-04-24 17:22:43 +08:00
Xintao
9ff1944d06 use GFPGAN v1.3 2022-02-23 20:44:51 +08:00
Xintao
3d96c8ab9f update logo size 2022-02-16 00:39:03 +08:00
Xintao
f115f40a77 V0.2.4.0 2022-02-15 23:57:21 +08:00
Xintao
2b4e485eb0 Update ReadMe (#259)
* add logo

* update readme

* update readme

* update readme

* update updates

* update updates

* update updates

* update updates

* update updates

* update readme

* update readme

* update readme

* update readme
2022-02-15 23:50:51 +08:00
Xintao
01aeba2f7a Add CODE_OF_CONDUCT.md 2022-01-08 20:07:51 +08:00
Xintao
3e65d21817 fix ffmpeg framerate bug 2021-12-14 00:12:36 +08:00
Xintao
b7f191a9f5 update demo video 2021-12-13 17:12:34 +08:00
Xintao
e83bf0e1e4 add realesrgan anime video colab demo 2021-12-12 21:26:43 +08:00
Xintao
f07aaffda0 V0.2.3.0: add anime video models 2021-12-12 20:19:09 +08:00
Xintao
20355e0c79 Update readme for anime video models; add video demo (#181)
* update readme

* update readme

* update readme

* update readme

* update readme

* update readme

* update readme

* update readme

* update readme
2021-12-12 20:17:30 +08:00
Xintao
192f672f91 add inference_realesrgan_video 2021-12-12 16:49:35 +08:00
Xintao
696e1a6741 add SRVGGNetCompact arch, update inference 2021-12-12 13:29:21 +08:00
Xintao
3e0085aeda V0.2.2.6 2021-12-09 17:41:19 +08:00
Xintao
42110857ef add unittest for model and utils 2021-11-28 19:54:19 +08:00
Xintao
1d180efaf3 add unittest for dataset and archs 2021-11-28 15:59:14 +08:00
Xintao
7dd860a881 catch more specific errors 2021-11-24 00:14:05 +08:00
Xintao
35ee6f781e improve codes comments 2021-11-23 00:52:00 +08:00
Xintao
c9023b3d7a Update README_CN.md (#142)
* update contribution

* update readme

* update readme

* update readme-cn

* update readme-cn

* update readme-cn

* update readme-cn

* update readme-cn
2021-11-01 19:16:48 +08:00
Asiimoviet
fb79d65ff3 Added Chinese README (#126)
* Added Chinese README

* Update README_CN.md

* Create README_CN.md
2021-11-01 17:00:06 +08:00
Xintao
3338b31f48 update setup.py, V0.2.2.5 2021-10-22 17:16:43 +08:00
Xintao
501efe3da6 update ReadMe 2021-10-17 01:33:45 +08:00
Xintao
8beb7ed17d add feedback of anime models 2021-10-17 01:27:04 +08:00
Xintao
e2d30f9ea4 update readme: add usage guidance 2021-10-17 01:03:55 +08:00
Xintao
d715e3d26a update readme 2021-10-16 23:30:57 +08:00
Xintao
772923e207 add codespell to pre-commit hook 2021-09-27 15:35:37 +08:00
Christian Clauss
14247a89d9 Fix typos discovered by codespell (#95)
* Improve performance

* !fixup Fix typo discovered by codespell

* fixup! Fix typo discovered by codespell

* fixup! Add codespell to lint process
2021-09-27 14:53:03 +08:00
Xintao
aa584e05bc minor updates on Training.md 2021-09-17 10:30:52 +08:00
Xintao
b525d1793b add training with one gpu 2021-09-17 10:13:25 +08:00
Xintao
0ad2e9c61e set num_gpu to auto in options 2021-09-17 10:07:09 +08:00
Xintao
90ddf13b5e Merge branch 'master' of github.com:xinntao/Real-ESRGAN 2021-09-07 21:28:03 +08:00
Xintao
8675208bc9 update: format and standard 2021-09-07 21:27:45 +08:00
Pratik Goyal
8f8536b6d1 Minor spelling correction (#67) 2021-09-03 14:38:03 +08:00
Xintao
f83472d011 version 0.2.2.4 2021-09-01 00:19:21 +08:00
Xintao
e1b8832f1b Update README for Real-ESRGAN-anime model (#62)
* try to add video mp4

* update

* update readme

* update readme

* update readme

* update readme

* update readme
2021-09-01 00:18:07 +08:00
Xintao
6ff747174d adapt Real-ESRGAN-anime model 2021-08-31 22:28:30 +08:00
Xintao
c1669c4b0a support model config during inference 2021-08-31 19:58:40 +08:00
Xintao
18a9c386a8 update readme 2021-08-30 00:02:27 +08:00
Xintao
b659608fb3 add contributing 2021-08-29 23:55:39 +08:00
Xintao
ee5f7b34cb update open issues 2021-08-29 11:28:15 +08:00
Xintao
3ce826cabe fix import bug in setup.py 2021-08-28 13:26:09 +08:00
Xintao
2c20a354b6 add check arg 2021-08-28 13:20:10 +08:00
Xintao
3e1f780f51 update readme 2021-08-27 16:25:38 +08:00
Xintao
f5ccd64ce5 support finetune with paired data 2021-08-27 16:14:48 +08:00
Xintao
194c2c14b3 update readme 2021-08-26 23:12:00 +08:00
Xintao
0fcb49a299 add extract_subimages 2021-08-26 22:55:56 +08:00
Xintao
9976a34454 update pypi, version 0.2.2.3 2021-08-26 22:27:19 +08:00
Xintao
424a09457b v0.2.2.2 2021-08-26 22:16:20 +08:00
Xintao
52f77e74a8 update publish-pip 2021-08-26 22:13:45 +08:00
Xintao
bfa4678bef add finetune_realesrgan_x4plus, version 0.2.2.1 2021-08-26 22:06:22 +08:00
Xintao
68f9f2445e add generate_meta_info 2021-08-24 22:20:10 +08:00
Xintao
7840a3d16a add generate_multiscale_DF2K 2021-08-24 21:51:00 +08:00
Xintao
b28958cdf2 update readme 2021-08-22 18:20:33 +08:00
Xintao
667e34e7ca support face enhance 2021-08-22 18:09:28 +08:00
AK391
978def19a6 Huggingface Gradio Web Demo (#47)
* Create gradiodemo.py

* Update requirements.txt

* Update gradiodemo.py

* Update requirements.txt

* Update requirements.txt

* Update README.md

* Update README.md

* Delete gradiodemo.py

* Update requirements.txt
2021-08-22 18:04:58 +08:00
Xintao
a7153c7fce add x2 options 2021-08-22 11:47:45 +08:00
Xintao
00116244cb minor updates 2021-08-22 11:08:11 +08:00
Xintao
571b89257a add no-response workflow, vscode format setting, update requirements 2021-08-18 10:50:27 +08:00
Xintao
bed7df7d99 minor update 2021-08-10 20:00:50 +08:00
Xintao
fb3ff055e4 update readme 2021-08-09 02:11:23 +08:00
Xintao
9ef97853f9 update readme 2021-08-09 02:10:15 +08:00
Xintao
58fea8db69 use warnings 2021-08-09 01:14:54 +08:00
Xintao
3c6cf5290e update readme; add faq.md 2021-08-08 21:42:32 +08:00
Xintao
64ad194dda Support outscale; Add RealESRGANx2 model; Version 0.2.1 2021-08-08 21:30:51 +08:00
Xintao
5745599813 update readme: add pypi workflow badge 2021-08-08 16:41:50 +08:00
Xintao
3ce0c97e89 update readme: add pip workflow badge 2021-08-08 16:35:38 +08:00
Xintao
13186ac2c2 Merge pull request #18 from xinntao/pypi
Support PyPI
2021-08-08 16:29:11 +08:00
Xintao
4356ba0578 update readme 2021-08-08 16:26:09 +08:00
Xintao
32a4fa1772 add publish-pip action 2021-08-08 16:20:01 +08:00
Xintao
18ebf723f2 adaption for pypi 2021-08-08 16:12:56 +08:00
Xintao
1f83ce5432 update .gitignore 2021-08-08 15:29:33 +08:00
Xintao
bef5e3cabd update for running 2021-08-08 15:27:55 +08:00
Xintao
064df9956b add setup.py 2021-08-08 15:13:53 +08:00
Xintao
52eab16d11 update .gitignore 2021-08-08 15:01:08 +08:00
Xintao
94ae626008 regroup 2021-08-08 14:50:32 +08:00
Xintao
9baa0b3d00 regroup files 2021-08-08 14:46:43 +08:00
Xintao
f932289af1 support half inference 2021-08-01 12:10:35 +08:00
Xintao
f59a0c66ec Add Linux/MacOS executable files 2021-08-01 02:35:36 +08:00
77 changed files with 3835 additions and 330 deletions

33
.github/workflows/no-response.yml vendored Normal file

@@ -0,0 +1,33 @@
name: No Response
# TODO: it does not seem to work
# Modified from: https://raw.githubusercontent.com/github/docs/main/.github/workflows/no-response.yaml
# **What it does**: Closes issues that don't have enough information to be actionable.
# **Why we have it**: To remove the need for maintainers to remember to check back on issues periodically
# to see if contributors have responded.
# **Who does it impact**: Everyone that works on docs or docs-internal.
on:
issue_comment:
types: [created]
schedule:
# Schedule for five minutes after the hour every hour
- cron: '5 * * * *'
jobs:
noResponse:
runs-on: ubuntu-latest
steps:
- uses: lee-dohm/no-response@v0.5.0
with:
token: ${{ github.token }}
closeComment: >
This issue has been automatically closed because there has been no response
to our request for more information from the original author. With only the
information that is currently in the issue, we don't have enough information
to take action. Please reach out if you have or find the answers we need so
that we can investigate further.
If you still have questions, please improve your description and re-open it.
Thanks :-)

33
.github/workflows/publish-pip.yml vendored Normal file

@@ -0,0 +1,33 @@
name: PyPI Publish
on: push
jobs:
build-n-publish:
runs-on: ubuntu-latest
if: startsWith(github.event.ref, 'refs/tags')
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.8
uses: actions/setup-python@v1
with:
python-version: 3.8
- name: Upgrade pip
run: pip install pip --upgrade
- name: Install PyTorch (cpu)
run: pip install torch==1.7.0+cpu torchvision==0.8.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
- name: Install dependencies
run: |
pip install basicsr
pip install facexlib
pip install gfpgan
pip install -r requirements.txt
- name: Build and install
run: rm -rf .eggs && pip install -e .
- name: Build for distribution
run: python setup.py sdist bdist_wheel
- name: Publish distribution to PyPI
uses: pypa/gh-action-pypi-publish@master
with:
password: ${{ secrets.PYPI_API_TOKEN }}

.github/workflows/pylint.yml

@@ -20,11 +20,12 @@ jobs:
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install flake8 yapf isort
pip install codespell flake8 isort yapf
# modify the folders accordingly
- name: Lint
run: |
codespell
flake8 .
isort --check-only --diff data/ models/ inference_realesrgan.py
yapf -r -d data/ models/ inference_realesrgan.py
isort --check-only --diff realesrgan/ scripts/ inference_realesrgan.py setup.py
yapf -r -d realesrgan/ scripts/ inference_realesrgan.py setup.py

11
.gitignore vendored

@@ -1,4 +1,13 @@
.vscode
# ignored folders
datasets/*
experiments/*
results/*
tb_logger/*
wandb/*
tmp/*
realesrgan/weights/*
version.py
# Byte-compiled / optimized / DLL files
__pycache__/

.pre-commit-config.yaml

@@ -24,6 +24,12 @@ repos:
hooks:
- id: yapf
# codespell
- repo: https://github.com/codespell-project/codespell
rev: v2.1.0
hooks:
- id: codespell
# pre-commit-hooks
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v3.2.0

19
.vscode/settings.json vendored Normal file

@@ -0,0 +1,19 @@
{
"files.trimTrailingWhitespace": true,
"editor.wordWrap": "on",
"editor.rulers": [
80,
120
],
"editor.renderWhitespace": "all",
"editor.renderControlCharacters": true,
"python.formatting.provider": "yapf",
"python.formatting.yapfArgs": [
"--style",
"{BASED_ON_STYLE = pep8, BLANK_LINE_BEFORE_NESTED_CLASS_OR_DEF = true, SPLIT_BEFORE_EXPRESSION_AFTER_OPENING_PAREN = true, COLUMN_LIMIT = 120}"
],
"python.linting.flake8Enabled": true,
"python.linting.flake8Args": [
"max-line-length=120"
],
}

128
CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
xintao.wang@outlook.com or xintaowang@tencent.com.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.

41
CONTRIBUTING.md Normal file

@@ -0,0 +1,41 @@
# Contributing to Real-ESRGAN
We like open source and want to develop practical algorithms for general image restoration. However, individual strength is limited, so all kinds of contributions are welcome, such as:
- New features
- New models (your fine-tuned models)
- Bug fixes
- Typo fixes
- Suggestions
- Maintenance
- Documentation
- *etc*
## Workflow
1. Fork and pull the latest Real-ESRGAN repository
1. Checkout a new branch (do not use master branch for PRs)
1. Commit your changes
1. Create a PR
**Note**:
1. Please check the code style and linting
1. The style configuration is specified in [setup.cfg](setup.cfg)
1. If you use VSCode, the settings are configured in [.vscode/settings.json](.vscode/settings.json)
1. We strongly recommend using the `pre-commit` hook. It will check your code style and linting before each commit.
1. In the root path of project folder, run `pre-commit install`
1. The pre-commit configuration is listed in [.pre-commit-config.yaml](.pre-commit-config.yaml)
1. It is better to [open a discussion](https://github.com/xinntao/Real-ESRGAN/discussions) before making large changes.
1. Discussions are welcome :sunglasses:. I will try my best to join them.
## TODO List
:zero: The most straightforward way of improving model performance is to fine-tune on some specific datasets.
Here are some TODOs:
- [ ] optimize for human faces
- [ ] optimize for texts
- [ ] support controllable restoration strength
:one: There are also [several issues](https://github.com/xinntao/Real-ESRGAN/issues) that need help. If you can help with any of them, please let me know :smile:

7
FAQ.md Normal file

@@ -0,0 +1,7 @@
# FAQ
1. **How do I select models?**<br>
A: Please refer to [docs/model_zoo.md](docs/model_zoo.md)
1. **Can `face_enhance` be used for anime images/animation videos?**<br>
A: No, it works only on real faces. Do not use this option for anime images/animation videos; disabling it also saves GPU memory.

8
MANIFEST.in Normal file

@@ -0,0 +1,8 @@
include assets/*
include inputs/*
include scripts/*.py
include inference_realesrgan.py
include VERSION
include LICENSE
include requirements.txt
include realesrgan/weights/README.md

197
README.md

@@ -1,27 +1,94 @@
# Real-ESRGAN
<p align="center">
<img src="assets/realesrgan_logo.png" height=120>
</p>
## <div align="center"><b><a href="README.md">English</a> | <a href="README_CN.md">简体中文</a></b></div>
[![download](https://img.shields.io/github/downloads/xinntao/Real-ESRGAN/total.svg)](https://github.com/xinntao/Real-ESRGAN/releases)
[![Open issue](https://isitmaintained.com/badge/open/xinntao/Real-ESRGAN.svg)](https://github.com/xinntao/Real-ESRGAN/issues)
[![PyPI](https://img.shields.io/pypi/v/realesrgan)](https://pypi.org/project/realesrgan/)
[![Open issue](https://img.shields.io/github/issues/xinntao/Real-ESRGAN)](https://github.com/xinntao/Real-ESRGAN/issues)
[![Closed issue](https://img.shields.io/github/issues-closed/xinntao/Real-ESRGAN)](https://github.com/xinntao/Real-ESRGAN/issues)
[![LICENSE](https://img.shields.io/github/license/xinntao/Real-ESRGAN.svg)](https://github.com/xinntao/Real-ESRGAN/blob/master/LICENSE)
[![python lint](https://github.com/xinntao/Real-ESRGAN/actions/workflows/pylint.yml/badge.svg)](https://github.com/xinntao/Real-ESRGAN/blob/master/.github/workflows/pylint.yml)
[![Publish-pip](https://github.com/xinntao/Real-ESRGAN/actions/workflows/publish-pip.yml/badge.svg)](https://github.com/xinntao/Real-ESRGAN/blob/master/.github/workflows/publish-pip.yml)
:fire: Update the **RealESRGAN AnimeVideo-v3** model. Please see [anime video models](docs/anime_video_model.md) and [comparisons](docs/anime_comparisons.md) for more details.
1. [Colab Demo](https://colab.research.google.com/drive/1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing) for Real-ESRGAN <a href="https://colab.research.google.com/drive/1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>.
2. [Portable Windows executable file](https://github.com/xinntao/Real-ESRGAN/releases). You can find more information [here](#Portable-executable-files).
2. [Colab Demo](https://colab.research.google.com/drive/1yNl9ORUxxlL4N0keJa2SEPB61imPQd1B?usp=sharing) for Real-ESRGAN (**anime videos**) <a href="https://colab.research.google.com/drive/1yNl9ORUxxlL4N0keJa2SEPB61imPQd1B?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>.
3. Portable [Windows](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [MacOS](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip) **executable files for Intel/AMD/Nvidia GPU**. You can find more information [here](#Portable-executable-files). The ncnn implementation is in [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan).
Real-ESRGAN aims at developing **Practical Algorithms for General Image Restoration**.<br>
Real-ESRGAN aims at developing **Practical Algorithms for General Image/Video Restoration**.<br>
We extend the powerful ESRGAN to a practical restoration application (namely, Real-ESRGAN), which is trained with pure synthetic data.
:triangular_flag_on_post: **Updates**
:art: Real-ESRGAN needs your contributions. Any contributions are welcome, such as new features/models/typo fixes/suggestions/maintenance, *etc*. See [CONTRIBUTING.md](CONTRIBUTING.md). All contributors are listed [here](README.md#hugs-acknowledgement).
- :white_check_mark: [The inference code](inference_realesrgan.py) supports: 1) **tile** options; 2) images with **alpha channel**; 3) **gray** images; 4) **16-bit** images.
- :white_check_mark: The training codes have been released. A detailed guide can be found in [Training.md](Training.md).
:question: Frequently Asked Questions can be found in [FAQ.md](FAQ.md).
:milky_way: Thanks for your valuable feedback/suggestions. All feedback is collected in [feedback.md](feedback.md).
---
If Real-ESRGAN is helpful in your photos/projects, please help to :star: this repo or recommend it to your friends. Thanks :blush: <br>
Other recommended projects:<br>
:arrow_forward: [GFPGAN](https://github.com/TencentARC/GFPGAN): A practical algorithm for real-world face restoration <br>
:arrow_forward: [BasicSR](https://github.com/xinntao/BasicSR): An open-source image and video restoration toolbox<br>
:arrow_forward: [facexlib](https://github.com/xinntao/facexlib): A collection that provides useful face-related functions.<br>
:arrow_forward: [HandyView](https://github.com/xinntao/HandyView): A PyQt5-based image viewer that is handy for viewing and comparison. <br>
---
<!---------------------------------- Updates --------------------------->
<details>
<summary>🚩<b>Updates</b></summary>
- ✅ Update the **RealESRGAN AnimeVideo-v3** model. Please see [anime video models](docs/anime_video_model.md) and [comparisons](docs/anime_comparisons.md) for more details.
- ✅ Add small models for anime videos. More details are in [anime video models](docs/anime_video_model.md).
- ✅ Add the ncnn implementation [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan).
- ✅ Add [*RealESRGAN_x4plus_anime_6B.pth*](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth), which is optimized for **anime** images with much smaller model size. More details and comparisons with [waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan) are in [**anime_model.md**](docs/anime_model.md)
- ✅ Support finetuning on your own data or paired data (*i.e.*, finetuning ESRGAN). See [here](Training.md#Finetune-Real-ESRGAN-on-your-own-dataset)
- ✅ Integrate [GFPGAN](https://github.com/TencentARC/GFPGAN) to support **face enhancement**.
- ✅ Integrated into [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See [Gradio Web Demo](https://huggingface.co/spaces/akhaliq/Real-ESRGAN). Thanks [@AK391](https://github.com/AK391)
- ✅ Support arbitrary scale with `--outscale` (It actually further resizes outputs with `LANCZOS4`). Add *RealESRGAN_x2plus.pth* model.
- ✅ [The inference code](inference_realesrgan.py) supports: 1) **tile** options; 2) images with **alpha channel**; 3) **gray** images; 4) **16-bit** images.
- ✅ The training codes have been released. A detailed guide can be found in [Training.md](Training.md).
</details>
<!---------------------------------- Projects that use Real-ESRGAN --------------------------->
<details>
<summary>🧩<b>Projects that use Real-ESRGAN</b></summary>
&nbsp;&nbsp;&nbsp;&nbsp;👋 If you develop/use Real-ESRGAN in your projects, feel free to let me know.
- NCNN-Android: [RealSR-NCNN-Android](https://github.com/tumuyan/RealSR-NCNN-Android) by [tumuyan](https://github.com/tumuyan)
- VapourSynth: [vs-realesrgan](https://github.com/HolyWu/vs-realesrgan) by [HolyWu](https://github.com/HolyWu)
- NCNN: [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan)
&nbsp;&nbsp;&nbsp;&nbsp;**GUI**
- [Waifu2x-Extension-GUI](https://github.com/AaronFeng753/Waifu2x-Extension-GUI) by [AaronFeng753](https://github.com/AaronFeng753)
- [Squirrel-RIFE](https://github.com/Justin62628/Squirrel-RIFE) by [Justin62628](https://github.com/Justin62628)
- [Real-GUI](https://github.com/scifx/Real-GUI) by [scifx](https://github.com/scifx)
- [Real-ESRGAN_GUI](https://github.com/net2cn/Real-ESRGAN_GUI) by [net2cn](https://github.com/net2cn)
- [Real-ESRGAN-EGUI](https://github.com/WGzeyu/Real-ESRGAN-EGUI) by [WGzeyu](https://github.com/WGzeyu)
- [anime_upscaler](https://github.com/shangar21/anime_upscaler) by [shangar21](https://github.com/shangar21)
</details>
<!---------------------------------- Demo videos --------------------------->
<details open>
<summary>👀<b>Demo videos</b></summary>
- [大闹天宫片段](https://www.bilibili.com/video/BV1ja41117zb)
</details>
### :book: Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data
> [[Paper](https://arxiv.org/abs/2107.10833)] &emsp; [Project Page] &emsp; [Demo] <br>
> [[Paper](https://arxiv.org/abs/2107.10833)] &emsp; [Project Page] &emsp; [[YouTube Video](https://www.youtube.com/watch?v=fxHWoDSSvSc)] &emsp; [[B站讲解](https://www.bilibili.com/video/BV1H34y1m7sS/)] &emsp; [[Poster](https://xinntao.github.io/projects/RealESRGAN_src/RealESRGAN_poster.pdf)] &emsp; [[PPT slides](https://docs.google.com/presentation/d/1QtW6Iy8rm8rGLsJ0Ldti6kP-7Qyzy6XL/edit?usp=sharing&ouid=109799856763657548160&rtpof=true&sd=true)]<br>
> [Xintao Wang](https://xinntao.github.io/), Liangbin Xie, [Chao Dong](https://scholar.google.com.hk/citations?user=OSDCB0UAAAAJ), [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ&hl=en) <br>
> Applied Research Center (ARC), Tencent PCG<br>
> Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
> Tencent ARC Lab; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
<p align="center">
<img src="assets/teaser.jpg">
@@ -41,7 +108,7 @@ Here is a TODO list in the near future:
- [ ] optimize for human faces
- [ ] optimize for texts
- [ ] optimize for animation images
- [x] optimize for anime images
- [ ] support more scales
- [ ] support controllable restoration strength
@@ -52,27 +119,48 @@ If you have some images that Real-ESRGAN could not well restored, please also op
### Portable executable files
You can download **Windows executable files** from https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRGAN-ncnn-vulkan-20210725-windows.zip
You can download [Windows](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [MacOS](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip) **executable files for Intel/AMD/Nvidia GPU**.
This executable file is **portable** and includes all the binaries and models required. No CUDA or PyTorch environment is needed.<br>
You can simply run the following command:
You can simply run the following command (a Windows example; more information is in the README.md of each executable file):
```bash
./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png
./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n model_name
```
We have provided three models:
We have provided four models:
1. realesrgan-x4plus (default)
2. realesrnet-x4plus
3. esrgan-x4
3. realesrgan-x4plus-anime (optimized for anime images, small model size)
4. realesr-animevideov3 (animation video)
You can use the `-n` argument for other models, for example, `./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n realesrnet-x4plus`
Note that it may introduce block inconsistency (and also generate slightly different results from the PyTorch implementation), because this executable file first crops the input image into several tiles, processes them separately, and finally stitches them together.
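For intuition, here is a minimal tiling sketch. It is an illustration only, not the actual ncnn code; the `upscale_fn` parameter is a hypothetical stand-in for the network:

```python
import numpy as np

def upscale_tiled(img, upscale_fn, tile=128, scale=4):
    """Upscale an image tile by tile (no overlap, for illustration only).

    Because each tile is upscaled without seeing its neighbors, pixel
    values can disagree along tile borders -- the block inconsistency
    mentioned above.
    """
    h, w, c = img.shape
    out = np.zeros((h * scale, w * scale, c), dtype=img.dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = img[y:y + tile, x:x + tile]
            ph, pw = patch.shape[:2]
            out[y * scale:(y + ph) * scale,
                x * scale:(x + pw) * scale] = upscale_fn(patch)
    return out
```

Overlapping and padding the tiles, as real implementations do, hides most of the visible borders.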
### Usage of executable files
This executable file is based on the wonderful [Tencent/ncnn](https://github.com/Tencent/ncnn) and [realsr-ncnn-vulkan](https://github.com/nihui/realsr-ncnn-vulkan) by [nihui](https://github.com/nihui).
1. Please refer to [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan#computer-usages) for more details.
1. Note that it does not support all the functions of the Python script `inference_realesrgan.py` (such as `outscale`).
```console
Usage: realesrgan-ncnn-vulkan.exe -i infile -o outfile [options]...
-h show this help
-i input-path input image path (jpg/png/webp) or directory
-o output-path output image path (jpg/png/webp) or directory
-s scale upscale ratio (can be 2, 3, 4. default=4)
-t tile-size tile size (>=32/0=auto, default=0) can be 0,0,0 for multi-gpu
-m model-path folder path to the pre-trained models. default=models
-n model-name model name (default=realesr-animevideov3, can be realesr-animevideov3 | realesrgan-x4plus | realesrgan-x4plus-anime | realesrnet-x4plus)
-g gpu-id gpu device to use (default=auto) can be 0,1,2 for multi-gpu
-j load:proc:save thread count for load/proc/save (default=1:2:2) can be 1:2,2,2:2 for multi-gpu
-x enable tta mode
-f format output image format (jpg/png/webp, default=ext/png)
-v verbose output
```
Note that it may introduce block inconsistency (and also generate slightly different results from the PyTorch implementation), because this executable file first crops the input image into several tiles, processes them separately, and finally stitches them together.
---
@@ -96,14 +184,18 @@ This executable file is based on the wonderful [Tencent/ncnn](https://github.com
# Install basicsr - https://github.com/xinntao/BasicSR
# We use BasicSR for both training and inference
pip install basicsr
# facexlib and gfpgan are for face enhancement
pip install facexlib
pip install gfpgan
pip install -r requirements.txt
python setup.py develop
```
## :zap: Quick Inference
Download pre-trained models: [RealESRGAN_x4plus.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth)
### Inference general images
Download pretrained models:
Download pre-trained models: [RealESRGAN_x4plus.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth)
```bash
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models
@@ -112,30 +204,75 @@ wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_
Inference!
```bash
python inference_realesrgan.py --model_path experiments/pretrained_models/RealESRGAN_x4plus.pth --input inputs
python inference_realesrgan.py -n RealESRGAN_x4plus -i inputs --face_enhance
```
Results are in the `results` folder
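You can also call Real-ESRGAN from Python instead of the command line. Below is a minimal sketch; the argument names follow the `realesrgan` package as installed by the steps above, but treat the details as assumptions and check `inference_realesrgan.py` for the authoritative usage:

```python
import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer

# RealESRGAN_x4plus uses the RRDBNet architecture (23 blocks, x4 scale)
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                num_block=23, num_grow_ch=32, scale=4)
upsampler = RealESRGANer(
    scale=4,
    model_path='experiments/pretrained_models/RealESRGAN_x4plus.pth',
    model=model,
    tile=0,      # set e.g. 400 to process in tiles and save GPU memory
    half=True)   # fp16 inference; pass half=False for fp32

img = cv2.imread('inputs/0014.jpg', cv2.IMREAD_UNCHANGED)  # any input image
output, _ = upsampler.enhance(img, outscale=4)
cv2.imwrite('results/0014_out.png', output)
```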
### Inference anime images
<p align="center">
<img src="https://raw.githubusercontent.com/xinntao/public-figures/master/Real-ESRGAN/cmp_realesrgan_anime_1.png">
</p>
Pre-trained models: [RealESRGAN_x4plus_anime_6B](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth)<br>
More details and comparisons with [waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan) are in [**anime_model.md**](docs/anime_model.md)
```bash
# download model
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth -P experiments/pretrained_models
# inference
python inference_realesrgan.py -n RealESRGAN_x4plus_anime_6B -i inputs
```
Results are in the `results` folder
### Usage of python script
1. You can use the X4 model for **arbitrary output size** with the argument `outscale`. The program performs a cheap resize operation on the Real-ESRGAN output (see the sketch after the usage below).
```console
Usage: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile -o outfile [options]...
A common command: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile --outscale 3.5 --face_enhance
-h show this help
-i --input Input image or folder. Default: inputs
-o --output Output folder. Default: results
-n --model_name Model name. Default: RealESRGAN_x4plus
-s, --outscale The final upsampling scale of the image. Default: 4
--suffix Suffix of the restored image. Default: out
-t, --tile Tile size, 0 for no tile during testing. Default: 0
--face_enhance Whether to use GFPGAN to enhance face. Default: False
--fp32 Use fp32 precision during inference. Default: fp16 (half precision).
--ext Image extension. Options: auto | jpg | png, auto means using the same extension as inputs. Default: auto
```
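For intuition, the `outscale` post-resize is roughly equivalent to the following sketch (assuming OpenCV; the actual script also handles alpha channels and other cases):

```python
import cv2

def final_resize(sr_img, lr_h, lr_w, outscale=3.5):
    """Map the fixed x4 network output to an arbitrary final scale.

    The network always produces a 4x result; a cheap LANCZOS4 resize
    then brings it to `outscale` times the input size.
    """
    return cv2.resize(sr_img, (int(lr_w * outscale), int(lr_h * outscale)),
                      interpolation=cv2.INTER_LANCZOS4)
```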
## :european_castle: Model Zoo
- [RealESRGAN-x4plus](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth)
- [RealESRNet-x4plus](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRNet_x4plus.pth)
- [official ESRGAN-x4](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth)
Please see [docs/model_zoo.md](docs/model_zoo.md)
## :computer: Training
## :computer: Training and Finetuning on your own dataset
A detailed guide can be found in [Training.md](Training.md).
## BibTeX
@Article{wang2021realesrgan,
title={Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data},
author={Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan},
journal={arXiv:2107.10833},
year={2021}
@InProceedings{wang2021realesrgan,
author = {Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan},
title = {Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data},
booktitle = {International Conference on Computer Vision Workshops (ICCVW)},
date = {2021}
}
## :e-mail: Contact
If you have any questions, please email `xintao.wang@outlook.com` or `xintaowang@tencent.com`.
## :hugs: Acknowledgement
Thanks to all the contributors.
- [AK391](https://github.com/AK391): Integrate RealESRGAN into [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See [Gradio Web Demo](https://huggingface.co/spaces/akhaliq/Real-ESRGAN).
- [Asiimoviet](https://github.com/Asiimoviet): Translate the README.md to Chinese (中文).
- [2ji3150](https://github.com/2ji3150): Thanks for the [detailed and valuable feedback/suggestions](https://github.com/xinntao/Real-ESRGAN/issues/131).

275
README_CN.md Normal file

@@ -0,0 +1,275 @@
<p align="center">
<img src="assets/realesrgan_logo.png" height=120>
</p>
## <div align="center"><b><a href="README.md">English</a> | <a href="README_CN.md">简体中文</a></b></div>
[![download](https://img.shields.io/github/downloads/xinntao/Real-ESRGAN/total.svg)](https://github.com/xinntao/Real-ESRGAN/releases)
[![PyPI](https://img.shields.io/pypi/v/realesrgan)](https://pypi.org/project/realesrgan/)
[![Open issue](https://img.shields.io/github/issues/xinntao/Real-ESRGAN)](https://github.com/xinntao/Real-ESRGAN/issues)
[![Closed issue](https://img.shields.io/github/issues-closed/xinntao/Real-ESRGAN)](https://github.com/xinntao/Real-ESRGAN/issues)
[![LICENSE](https://img.shields.io/github/license/xinntao/Real-ESRGAN.svg)](https://github.com/xinntao/Real-ESRGAN/blob/master/LICENSE)
[![python lint](https://github.com/xinntao/Real-ESRGAN/actions/workflows/pylint.yml/badge.svg)](https://github.com/xinntao/Real-ESRGAN/blob/master/.github/workflows/pylint.yml)
[![Publish-pip](https://github.com/xinntao/Real-ESRGAN/actions/workflows/publish-pip.yml/badge.svg)](https://github.com/xinntao/Real-ESRGAN/blob/master/.github/workflows/publish-pip.yml)
:fire: 更新动漫视频的小模型 **RealESRGAN AnimeVideo-v3**. 更多信息在 [anime video models](docs/anime_video_model.md) 和 [comparisons](docs/anime_comparisons.md)中.
1. Real-ESRGAN的[Colab Demo](https://colab.research.google.com/drive/1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing) <a href="https://colab.research.google.com/drive/1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>.
2. Real-ESRGAN的 **动漫视频** 的[Colab Demo](https://colab.research.google.com/drive/1yNl9ORUxxlL4N0keJa2SEPB61imPQd1B?usp=sharing) <a href="https://colab.research.google.com/drive/1yNl9ORUxxlL4N0keJa2SEPB61imPQd1B?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>.
3. **支持Intel/AMD/Nvidia显卡**的绿色版exe文件 [Windows版](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux版](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [macOS版](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip),详情请移步[这里](#便携版(绿色版)可执行文件)。NCNN的实现在 [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan)。
Real-ESRGAN 的目标是开发出**实用的图像/视频修复算法**。<br>
我们在 ESRGAN 的基础上使用纯合成的数据来进行训练,以使其能被应用于实际的图片修复的场景(顾名思义:Real-ESRGAN)。
:art: Real-ESRGAN 需要,也很欢迎你的贡献,如新功能、模型、bug 修复、建议、维护等等。详情可以查看 [CONTRIBUTING.md](CONTRIBUTING.md),所有的贡献者都会被列在[此处](README_CN.md#hugs-感谢)。
:milky_way: 感谢大家提供了很好的反馈。这些反馈会逐步更新在 [这个文档](feedback.md)。
:question: 常见的问题可以在 [FAQ.md](FAQ.md) 中找到答案。(好吧,现在还是空白的 =-=||)
---
如果 Real-ESRGAN 对你有帮助,可以给本项目一个 Star :star: ,或者推荐给你的朋友们,谢谢!:blush: <br/>
其他推荐的项目:<br/>
:arrow_forward: [GFPGAN](https://github.com/TencentARC/GFPGAN): 实用的人脸复原算法 <br>
:arrow_forward: [BasicSR](https://github.com/xinntao/BasicSR): 开源的图像和视频工具箱<br>
:arrow_forward: [facexlib](https://github.com/xinntao/facexlib): 提供与人脸相关的工具箱<br>
:arrow_forward: [HandyView](https://github.com/xinntao/HandyView): 基于PyQt5的图片查看器方便查看以及比较 <br>
---
<!---------------------------------- Updates --------------------------->
<details>
<summary>🚩<b>更新</b></summary>
- ✅ 更新动漫视频的小模型 **RealESRGAN AnimeVideo-v3**. 更多信息在 [anime video models](docs/anime_video_model.md) 和 [comparisons](docs/anime_comparisons.md)中.
- ✅ 添加了针对动漫视频的小模型, 更多信息在 [anime video models](docs/anime_video_model.md) 中.
- ✅ 添加了ncnn 实现:[Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan).
- ✅ 添加了 [*RealESRGAN_x4plus_anime_6B.pth*](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth),对二次元图片进行了优化,并减少了 model 的大小。详情以及与 [waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan) 的对比请查看 [**anime_model.md**](docs/anime_model.md)
- ✅ 支持用户在自己的数据上进行微调 (finetune):[详情](Training.md#Finetune-Real-ESRGAN-on-your-own-dataset)
- ✅ 支持使用[GFPGAN](https://github.com/TencentARC/GFPGAN)**增强人脸**
- ✅ 通过[Gradio](https://github.com/gradio-app/gradio)添加到了[Huggingface Spaces](https://huggingface.co/spaces)(一个机器学习应用的在线平台):[Gradio在线版](https://huggingface.co/spaces/akhaliq/Real-ESRGAN)。感谢[@AK391](https://github.com/AK391)
- ✅ 支持任意比例的缩放:`--outscale`(实际上使用`LANCZOS4`来更进一步调整输出图像的尺寸)。添加了*RealESRGAN_x2plus.pth*模型
- ✅ [推断脚本](inference_realesrgan.py)支持: 1) 分块处理**tile**; 2) 带**alpha通道**的图像; 3) **灰色**图像; 4) **16-bit**图像.
- ✅ 训练代码已经发布,具体做法可查看:[Training.md](Training.md)。
</details>
<!---------------------------------- Projects that use Real-ESRGAN --------------------------->
<details>
<summary>🧩<b>使用Real-ESRGAN的项目</b></summary>
&nbsp;&nbsp;&nbsp;&nbsp;👋 如果你开发/使用/集成了Real-ESRGAN, 欢迎联系我添加
- NCNN-Android: [RealSR-NCNN-Android](https://github.com/tumuyan/RealSR-NCNN-Android) by [tumuyan](https://github.com/tumuyan)
- VapourSynth: [vs-realesrgan](https://github.com/HolyWu/vs-realesrgan) by [HolyWu](https://github.com/HolyWu)
- NCNN: [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan)
&nbsp;&nbsp;&nbsp;&nbsp;**易用的图形界面**
- [Waifu2x-Extension-GUI](https://github.com/AaronFeng753/Waifu2x-Extension-GUI) by [AaronFeng753](https://github.com/AaronFeng753)
- [Squirrel-RIFE](https://github.com/Justin62628/Squirrel-RIFE) by [Justin62628](https://github.com/Justin62628)
- [Real-GUI](https://github.com/scifx/Real-GUI) by [scifx](https://github.com/scifx)
- [Real-ESRGAN_GUI](https://github.com/net2cn/Real-ESRGAN_GUI) by [net2cn](https://github.com/net2cn)
- [Real-ESRGAN-EGUI](https://github.com/WGzeyu/Real-ESRGAN-EGUI) by [WGzeyu](https://github.com/WGzeyu)
- [anime_upscaler](https://github.com/shangar21/anime_upscaler) by [shangar21](https://github.com/shangar21)
</details>
<details>
<summary>👀<b>Demo视频B站</b></summary>
- [大闹天宫片段](https://www.bilibili.com/video/BV1ja41117zb)
</details>
### :book: Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data
> [[论文](https://arxiv.org/abs/2107.10833)] &emsp; [项目主页] &emsp; [[YouTube 视频](https://www.youtube.com/watch?v=fxHWoDSSvSc)] &emsp; [[B站视频](https://www.bilibili.com/video/BV1H34y1m7sS/)] &emsp; [[Poster](https://xinntao.github.io/projects/RealESRGAN_src/RealESRGAN_poster.pdf)] &emsp; [[PPT](https://docs.google.com/presentation/d/1QtW6Iy8rm8rGLsJ0Ldti6kP-7Qyzy6XL/edit?usp=sharing&ouid=109799856763657548160&rtpof=true&sd=true)]<br>
> [Xintao Wang](https://xinntao.github.io/), Liangbin Xie, [Chao Dong](https://scholar.google.com.hk/citations?user=OSDCB0UAAAAJ), [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ&hl=en) <br>
> Tencent ARC Lab; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
<p align="center">
<img src="assets/teaser.jpg">
</p>
---
我们提供了一套训练好的模型(*RealESRGAN_x4plus.pth*)可以进行4倍的超分辨率。<br>
**现在的 Real-ESRGAN 还是有几率失败的,因为现实生活的降质过程比较复杂。**<br>
而且,本项目对**人脸以及文字之类**的效果还不是太好,但是我们会持续进行优化的。<br>
Real-ESRGAN 将会被长期支持,我会在空闲的时间中持续维护更新。
这些是未来计划的几个新功能:
- [ ] 优化人脸
- [ ] 优化文字
- [x] 优化动画图像
- [ ] 支持更多的超分辨率比例
- [ ] 可调节的复原
如果你有好主意或需求,欢迎在 issue 或 discussion 中提出。<br/>
如果你有一些 Real-ESRGAN 中有问题的照片,你也可以在 issue 或者 discussion 中发出来。我会留意(但是不一定能解决:stuck_out_tongue:)。如果有必要的话,我还会专门开一页来记录那些有待解决的图像。
---
### 便携版(绿色版)可执行文件
你可以下载**支持Intel/AMD/Nvidia显卡**的绿色版exe文件 [Windows版](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux版](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [macOS版](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip)。
绿色版指的是:这些 exe 你可以直接运行(放 U 盘里拷走都没问题),因为里面已经有所需的文件和模型了。它不需要 CUDA 或者 PyTorch 运行环境。<br>
你可以通过下面这个命令来运行(以 Windows 版本为例,更多信息请查看对应版本的 README.md):
```bash
./realesrgan-ncnn-vulkan.exe -i 输入图像.jpg -o 输出图像.png -n 模型名字
```
我们提供了四种模型:
1. realesrgan-x4plus(默认)
2. realesrnet-x4plus
3. realesrgan-x4plus-anime(针对动漫插画图像优化,有更小的体积)
4. realesr-animevideov3(针对动漫视频)
你可以通过`-n`参数来使用其他模型,例如`./realesrgan-ncnn-vulkan.exe -i 二次元图片.jpg -o 二刺螈图片.png -n realesrgan-x4plus-anime`
### 可执行文件的用法
1. 更多细节可以参考 [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan#computer-usages).
2. 注意:可执行文件并没有支持 python 脚本 `inference_realesrgan.py` 中所有的功能(比如 `outscale` 选项)。
```console
Usage: realesrgan-ncnn-vulkan.exe -i infile -o outfile [options]...
-h show this help
-i input-path input image path (jpg/png/webp) or directory
-o output-path output image path (jpg/png/webp) or directory
-s scale upscale ratio (can be 2, 3, 4. default=4)
-t tile-size tile size (>=32/0=auto, default=0) can be 0,0,0 for multi-gpu
-m model-path folder path to the pre-trained models. default=models
-n model-name model name (default=realesr-animevideov3, can be realesr-animevideov3 | realesrgan-x4plus | realesrgan-x4plus-anime | realesrnet-x4plus)
-g gpu-id gpu device to use (default=auto) can be 0,1,2 for multi-gpu
-j load:proc:save thread count for load/proc/save (default=1:2:2) can be 1:2,2,2:2 for multi-gpu
-x enable tta mode
-f format output image format (jpg/png/webp, default=ext/png)
-v verbose output
```
由于这些 exe 文件会把图像分成几个板块,然后分别进行处理,再合成导出,因此输出的图像可能会有一点割裂感(而且可能跟 PyTorch 的输出不太一样)。
---
## :wrench: 依赖以及安装
- Python >= 3.7 (推荐使用[Anaconda](https://www.anaconda.com/download/#linux)或[Miniconda](https://docs.conda.io/en/latest/miniconda.html))
- [PyTorch >= 1.7](https://pytorch.org/)
#### 安装
1. 把项目克隆到本地
```bash
git clone https://github.com/xinntao/Real-ESRGAN.git
cd Real-ESRGAN
```
2. 安装各种依赖
```bash
# 安装 basicsr - https://github.com/xinntao/BasicSR
# 我们使用BasicSR来训练以及推断
pip install basicsr
# facexlib和gfpgan是用来增强人脸的
pip install facexlib
pip install gfpgan
pip install -r requirements.txt
python setup.py develop
```
## :zap: 快速上手
### 普通图片
下载我们训练好的模型: [RealESRGAN_x4plus.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth)
```bash
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models
```
推断!
```bash
python inference_realesrgan.py -n RealESRGAN_x4plus -i inputs --face_enhance
```
结果在`results`文件夹
### 动画图片
<p align="center">
<img src="https://raw.githubusercontent.com/xinntao/public-figures/master/Real-ESRGAN/cmp_realesrgan_anime_1.png">
</p>
训练好的模型: [RealESRGAN_x4plus_anime_6B](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth)<br>
有关[waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan)的更多信息和对比在[**anime_model.md**](docs/anime_model.md)中。
```bash
# 下载模型
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth -P experiments/pretrained_models
# 推断
python inference_realesrgan.py -n RealESRGAN_x4plus_anime_6B -i inputs
```
结果在`results`文件夹
### Python 脚本的用法
1. 虽然你使用的是 X4 模型,但只要使用了 `outscale` 参数,就可以 **输出任意尺寸比例的图片**。程序会进一步对模型的输出图像进行缩放。
```console
Usage: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile -o outfile [options]...
A common command: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile --outscale 3.5 --face_enhance
-h show this help
-i --input Input image or folder. Default: inputs
-o --output Output folder. Default: results
-n --model_name Model name. Default: RealESRGAN_x4plus
-s, --outscale The final upsampling scale of the image. Default: 4
--suffix Suffix of the restored image. Default: out
-t, --tile Tile size, 0 for no tile during testing. Default: 0
--face_enhance Whether to use GFPGAN to enhance face. Default: False
--fp32 Use fp32 precision during inference. Default: fp16 (half precision).
--ext Image extension. Options: auto | jpg | png, auto means using the same extension as inputs. Default: auto
```
## :european_castle: 模型库
请参见 [docs/model_zoo.md](docs/model_zoo.md)
## :computer: 训练在你的数据上微调Fine-tune
这里有一份详细的指南:[Training.md](Training.md).
## BibTeX 引用
@Article{wang2021realesrgan,
title={Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data},
author={Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan},
journal={arXiv:2107.10833},
year={2021}
}
## :e-mail: 联系我们
如果你有任何问题,请通过 `xintao.wang@outlook.com` 或 `xintaowang@tencent.com` 联系我们。
## :hugs: 感谢
感谢所有的贡献者大大们~
- [AK391](https://github.com/AK391): 通过[Gradio](https://github.com/gradio-app/gradio)添加到了[Huggingface Spaces](https://huggingface.co/spaces)(一个机器学习应用的在线平台):[Gradio在线版](https://huggingface.co/spaces/akhaliq/Real-ESRGAN)。
- [Asiimoviet](https://github.com/Asiimoviet): 把 README.md 文档 翻译成了中文。
- [2ji3150](https://github.com/2ji3150): 感谢详尽并且富有价值的[反馈、建议](https://github.com/xinntao/Real-ESRGAN/issues/131).

Training.md

@@ -1,16 +1,24 @@
# :computer: How to Train Real-ESRGAN
# :computer: How to Train/Finetune Real-ESRGAN
The training codes have been released. <br>
Note that the codes have been refactored substantially, so there may be some bugs or performance drops. You are welcome to report issues, and I will also retrain the models.
- [Train Real-ESRGAN](#train-real-esrgan)
- [Overview](#overview)
- [Dataset Preparation](#dataset-preparation)
- [Train Real-ESRNet](#Train-Real-ESRNet)
- [Train Real-ESRGAN](#Train-Real-ESRGAN)
- [Finetune Real-ESRGAN on your own dataset](#Finetune-Real-ESRGAN-on-your-own-dataset)
- [Generate degraded images on the fly](#Generate-degraded-images-on-the-fly)
- [Use paired training data](#use-your-own-paired-data)
## Overview
## Train Real-ESRGAN
### Overview
The training has been divided into two stages. These two stages have the same data synthesis process and training pipeline, except for the loss functions. Specifically,
1. We first train Real-ESRNet with L1 loss from the pre-trained model ESRGAN.
1. We then use the trained Real-ESRNet model as an initialization of the generator, and train the Real-ESRGAN with a combination of L1 loss, perceptual loss and GAN loss.
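Schematically, the second-stage generator objective combines the three terms as in the sketch below. The weights are illustrative placeholders; the actual values and loss classes are configured in the training option files:

```python
import torch
import torch.nn.functional as F

def generator_loss(sr, gt, perceptual_fn, discriminator,
                   w_pix=1.0, w_percep=1.0, w_gan=0.1):
    """Stage-2 objective: L1 + perceptual + GAN (weights are placeholders).

    perceptual_fn should return a VGG-feature distance between sr and gt;
    the GAN term pushes the discriminator's logits on sr toward "real".
    """
    l_pix = F.l1_loss(sr, gt)
    l_percep = perceptual_fn(sr, gt)
    logits_fake = discriminator(sr)
    l_gan = F.binary_cross_entropy_with_logits(
        logits_fake, torch.ones_like(logits_fake))
    return w_pix * l_pix + w_percep * l_percep + w_gan * l_gan
```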
## Dataset Preparation
### Dataset Preparation
We use DF2K (DIV2K and Flickr2K) + OST datasets for our training. Only HR images are required. <br>
You can download from:
@@ -19,9 +27,30 @@ You can download from :
2. Flickr2K: https://cv.snu.ac.kr/research/EDSR/Flickr2K.tar
3. OST: https://openmmlab.oss-cn-hangzhou.aliyuncs.com/datasets/OST_dataset.zip
For the DF2K dataset, we use a multi-scale strategy, *i.e.*, we downsample HR images to obtain several Ground-Truth images with different scales.
Here are steps for data preparation.
We then crop DF2K images into sub-images for faster IO and processing.
#### Step 1: [Optional] Generate multi-scale images
For the DF2K dataset, we use a multi-scale strategy, *i.e.*, we downsample HR images to obtain several Ground-Truth images with different scales. <br>
You can use the [scripts/generate_multiscale_DF2K.py](scripts/generate_multiscale_DF2K.py) script to generate multi-scale images. <br>
Note that this step can be omitted if you just want a quick try.
```bash
python scripts/generate_multiscale_DF2K.py --input datasets/DF2K/DF2K_HR --output datasets/DF2K/DF2K_multiscale
```
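Conceptually, the script does something like the following minimal sketch (the scales and file naming here are illustrative; the script above is authoritative):

```python
from pathlib import Path
from PIL import Image

def make_multiscale(src_dir, dst_dir, scales=(0.75, 0.5, 1 / 3)):
    """Save each HR image at several downsampled scales (LANCZOS filter)."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src_dir).glob('*.png')):
        img = Image.open(path)
        img.save(dst / path.name)  # keep the original scale as well
        for idx, s in enumerate(scales, start=1):
            small = img.resize((int(img.width * s), int(img.height * s)),
                               resample=Image.LANCZOS)
            small.save(dst / f'{path.stem}T{idx}.png')
```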
#### Step 2: [Optional] Crop to sub-images
We then crop DF2K images into sub-images for faster IO and processing.<br>
You can skip this step if your IO is fast enough or your disk space is limited.
You can use the [scripts/extract_subimages.py](scripts/extract_subimages.py) script. Here is an example:
```bash
python scripts/extract_subimages.py --input datasets/DF2K/DF2K_multiscale --output datasets/DF2K/DF2K_multiscale_sub --crop_size 400 --step 200
```
#### Step 3: Prepare a txt for meta information
You need to prepare a txt file containing the image paths. The following are some examples in `meta_info_DF2Kmultiscale+OST_sub.txt` (as different users may partition sub-images differently, this file may not suit your data, and you need to prepare your own txt file):
@@ -32,7 +61,14 @@ DF2K_HR_sub/000001_s003.png
...
```
## Train Real-ESRNet
You can use the [scripts/generate_meta_info.py](scripts/generate_meta_info.py) script to generate the txt file. <br>
You can merge several folders into one meta_info txt. Here is an example:
```bash
python scripts/generate_meta_info.py --input datasets/DF2K/DF2K_HR, datasets/DF2K/DF2K_multiscale --root datasets/DF2K, datasets/DF2K --meta_info datasets/DF2K/meta_info/meta_info_DF2Kmultiscale.txt
```
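Conceptually, a meta_info file is just one image path per line, relative to the dataset root. A minimal sketch of producing one (a hypothetical helper, for illustration):

```python
from pathlib import Path

def write_meta_info(folders, roots, meta_info):
    """Write image paths (relative to their root), one per line."""
    Path(meta_info).parent.mkdir(parents=True, exist_ok=True)
    with open(meta_info, 'w') as fout:
        for folder, root in zip(folders, roots):
            for img in sorted(Path(folder).rglob('*.png')):
                fout.write(f'{img.relative_to(root)}\n')

write_meta_info(
    folders=['datasets/DF2K/DF2K_HR', 'datasets/DF2K/DF2K_multiscale'],
    roots=['datasets/DF2K', 'datasets/DF2K'],
    meta_info='datasets/DF2K/meta_info/meta_info_DF2Kmultiscale.txt')
```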
### Train Real-ESRNet
1. Download pre-trained model [ESRGAN](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth) into `experiments/pretrained_models`.
```bash
@@ -44,7 +80,7 @@ DF2K_HR_sub/000001_s003.png
name: DF2K+OST
type: RealESRGANDataset
dataroot_gt: datasets/DF2K # modify to the root path of your folder
meta_info: data/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt # modify to your own generate meta info txt
meta_info: realesrgan/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt # modify to your own generated meta_info txt
io_backend:
type: disk
```
@@ -76,25 +112,158 @@ DF2K_HR_sub/000001_s003.png
1. Before the formal training, you may run in the `--debug` mode to see whether everything is OK. We use four GPUs for training:
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --debug
python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --debug
```
Train with **a single GPU** in the *debug* mode:
```bash
python realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --debug
```
1. The formal training. We use four GPUs for training. We use the `--auto_resume` argument to automatically resume the training if necessary.
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --auto_resume
python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --auto_resume
```
## Train Real-ESRGAN
Train with **a single GPU**:
```bash
python realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --auto_resume
```
### Train Real-ESRGAN
1. After the training of Real-ESRNet, you now have the file `experiments/train_RealESRNetx4plus_1000k_B12G4_fromESRGAN/model/net_g_1000000.pth`. If you need to specify the pre-trained path to other files, modify the `pretrain_network_g` value in the option file `train_realesrgan_x4plus.yml`.
1. Modify the option file `train_realesrgan_x4plus.yml` accordingly. Most modifications are similar to those listed above.
1. Before the formal training, you may run in the `--debug` mode to see whether everything is OK. We use four GPUs for training:
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --debug
python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --debug
```
Train with **a single GPU** in the *debug* mode:
```bash
python realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --debug
```
1. The formal training. We use four GPUs for training. We use the `--auto_resume` argument to automatically resume the training if necessary.
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --auto_resume
python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --auto_resume
```
Train with **a single GPU**:
```bash
python realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --auto_resume
```
## Finetune Real-ESRGAN on your own dataset
You can finetune Real-ESRGAN on your own dataset. Typically, the fine-tuning process can be divided into two cases:
1. [Generate degraded images on the fly](#Generate-degraded-images-on-the-fly)
1. [Use your own **paired** data](#Use-paired-training-data)
### Generate degraded images on the fly
Only high-resolution images are required; the low-quality counterparts are generated on the fly with the degradation process described in Real-ESRGAN during training.
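As a rough illustration only, the sketch below applies a single-pass toy degradation; the actual Real-ESRGAN pipeline uses a randomized second-order degradation (two rounds of blur/resize/noise/JPEG) with many more options:

```python
import cv2
import numpy as np

def toy_degrade(hr, scale=4):
    """Toy LR synthesis: blur -> downsample -> Gaussian noise -> JPEG."""
    img = cv2.GaussianBlur(hr, (7, 7), sigmaX=np.random.uniform(0.2, 3.0))
    h, w = img.shape[:2]
    img = cv2.resize(img, (w // scale, h // scale),
                     interpolation=cv2.INTER_AREA)
    img = np.clip(img + np.random.normal(0, 5, img.shape), 0, 255)
    quality = int(np.random.uniform(30, 95))
    ok, buf = cv2.imencode('.jpg', img.astype(np.uint8),
                           [int(cv2.IMWRITE_JPEG_QUALITY), quality])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)
```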
**1. Prepare dataset**
See [this section](#dataset-preparation) for more details.
**2. Download pre-trained models**
Download pre-trained models into `experiments/pretrained_models`.
- *RealESRGAN_x4plus.pth*:
```bash
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models
```
- *RealESRGAN_x4plus_netD.pth*:
```bash
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x4plus_netD.pth -P experiments/pretrained_models
```
**3. Finetune**
Modify [options/finetune_realesrgan_x4plus.yml](options/finetune_realesrgan_x4plus.yml) accordingly, especially the `datasets` part:
```yml
train:
name: DF2K+OST
type: RealESRGANDataset
dataroot_gt: datasets/DF2K # modify to the root path of your folder
meta_info: realesrgan/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt # modify to your own generated meta_info txt
io_backend:
type: disk
```
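The meta_info txt referenced above is assumed to be a plain list of ground-truth image paths, one per line, relative to `dataroot_gt`. A minimal sketch of generating such a file (the sub-folder name is a placeholder):
```python
# Hedged sketch: write a meta_info txt for RealESRGANDataset, one gt image
# path per line, relative to dataroot_gt. Folder names here are placeholders.
import glob
import os

dataroot_gt = 'datasets/DF2K'       # must match dataroot_gt in the yml
sub_folder = 'DF2K_multiscale_sub'  # hypothetical sub-image folder
paths = sorted(glob.glob(os.path.join(dataroot_gt, sub_folder, '*.png')))
os.makedirs(os.path.join(dataroot_gt, 'meta_info'), exist_ok=True)
with open(os.path.join(dataroot_gt, 'meta_info', 'meta_info_mydata.txt'), 'w') as f:
    for path in paths:
        f.write(os.path.relpath(path, dataroot_gt) + '\n')
```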
We use four GPUs for training. We use the `--auto_resume` argument to automatically resume the training if necessary.
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/finetune_realesrgan_x4plus.yml --launcher pytorch --auto_resume
```
Finetune with **a single GPU**:
```bash
python realesrgan/train.py -opt options/finetune_realesrgan_x4plus.yml --auto_resume
```
### Use your own paired data
You can also finetune RealESRGAN with your own paired data; this is closer to fine-tuning the original ESRGAN.
**1. Prepare dataset**
Assume that you already have two folders:
- **gt folder** (Ground-truth, high-resolution images): *datasets/DF2K/DIV2K_train_HR_sub*
- **lq folder** (Low quality, low-resolution images): *datasets/DF2K/DIV2K_train_LR_bicubic_X4_sub*
Then, you can prepare the meta_info txt file using the script [scripts/generate_meta_info_pairdata.py](scripts/generate_meta_info_pairdata.py):
```bash
python scripts/generate_meta_info_pairdata.py --input datasets/DF2K/DIV2K_train_HR_sub datasets/DF2K/DIV2K_train_LR_bicubic_X4_sub --meta_info datasets/DF2K/meta_info/meta_info_DIV2K_sub_pair.txt
```
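Each line of the generated file is assumed to hold the gt and lq sub-paths separated by a comma and a space; a quick way to inspect it:
```python
# Hedged sketch: peek at the paired meta_info format (gt_path, lq_path per line),
# assuming the file produced by the command above.
with open('datasets/DF2K/meta_info/meta_info_DIV2K_sub_pair.txt') as f:
    for line in f:
        gt_path, lq_path = line.strip().split(', ')
        print(gt_path, '<->', lq_path)
        break  # inspect only the first pair
```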
**2. Download pre-trained models**
Download pre-trained models into `experiments/pretrained_models`.
- *RealESRGAN_x4plus.pth*
```bash
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models
```
- *RealESRGAN_x4plus_netD.pth*
```bash
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x4plus_netD.pth -P experiments/pretrained_models
```
**3. Finetune**
Modify [options/finetune_realesrgan_x4plus_pairdata.yml](options/finetune_realesrgan_x4plus_pairdata.yml) accordingly, especially the `datasets` part:
```yml
train:
name: DIV2K
type: RealESRGANPairedDataset
dataroot_gt: datasets/DF2K # modify to the root path of your folder
dataroot_lq: datasets/DF2K # modify to the root path of your folder
    meta_info: datasets/DF2K/meta_info/meta_info_DIV2K_sub_pair.txt  # modify to your own meta_info txt file
io_backend:
type: disk
```
We use four GPUs for training. We use the `--auto_resume` argument to automatically resume the training if necessary.
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/finetune_realesrgan_x4plus_pairdata.yml --launcher pytorch --auto_resume
```
Finetune with **a single GPU**:
```bash
python realesrgan/train.py -opt options/finetune_realesrgan_x4plus_pairdata.yml --auto_resume
```
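Optionally, you may want to verify that each lq image is exactly 1/4 of its gt counterpart in both dimensions for an X4 model; a minimal sketch (the file name is a placeholder):
```python
# Hedged sanity check for X4 paired data: lq should be exactly 1/4 of gt
# in each spatial dimension. The file name below is a placeholder.
import cv2

gt = cv2.imread('datasets/DF2K/DIV2K_train_HR_sub/000001_s001.png')
lq = cv2.imread('datasets/DF2K/DIV2K_train_LR_bicubic_X4_sub/000001_s001.png')
assert gt.shape[0] == 4 * lq.shape[0] and gt.shape[1] == 4 * lq.shape[1]
```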

VERSION (new file)

@@ -0,0 +1 @@
0.2.5.0

BIN assets/realesrgan_logo.png (new binary file, 83 KiB)
BIN (four additional new binary image assets, 81 KiB each; filenames not shown)
BIN assets/teaser-text.png (new binary file, 546 KiB)

docs/anime_comparisons.md (new file)

@@ -0,0 +1,65 @@
# Comparisons among different anime models
## Update News
- 2022/04/24: Release **AnimeVideo-v3**. We have made the following improvements:
  - **better naturalness**
  - **fewer artifacts**
  - **more faithful to the original colors**
  - **better texture restoration**
  - **better background restoration**
## Comparisons
We have compared our RealESRGAN-AnimeVideo-v3 with the following methods; it achieves better results with a faster inference speed.
- [waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan) with the hyperparameters: `tile=0`, `noiselevel=2`
- [Real-CUGAN](https://github.com/bilibili/ailab/tree/main/Real-CUGAN):
  we use the [20220227](https://github.com/bilibili/ailab/releases/tag/Real-CUGAN-add-faster-low-memory-mode) version with the hyperparameters: `cache_mode=0`, `tile=0`, `alpha=1`.
- our RealESRGAN-AnimeVideo-v3
## Results
You may need to **zoom in** to compare details, or **click the image** to see it in full size.
**More natural results, better background restoration**
| Input | waifu2x | Real-CUGAN | RealESRGAN<br>AnimeVideo-v3 |
| :---: | :---: | :---: | :---: |
|![157083983-bec52c67-9a5e-4eed-afef-01fe6cd2af85_patch](https://user-images.githubusercontent.com/11482921/164452769-5d8cb4f8-1708-42d2-b941-f44a6f136feb.png) | ![](https://user-images.githubusercontent.com/11482921/164452767-c825cdec-f721-4ff1-aef1-fec41f146c4c.png) | ![](https://user-images.githubusercontent.com/11482921/164452755-3be50895-e3d4-432d-a7b9-9085c2a8e771.png) | ![](https://user-images.githubusercontent.com/11482921/164452771-be300656-379a-4323-a755-df8025a8c451.png) |
|![a0010_patch](https://user-images.githubusercontent.com/11482921/164454047-22eeb493-3fa9-4142-9fc2-6f2a1c074cd5.png) | ![](https://user-images.githubusercontent.com/11482921/164454046-d5e79f8f-00a0-4b55-bc39-295d0d69747a.png) | ![](https://user-images.githubusercontent.com/11482921/164454040-87886b11-9d08-48bd-862f-0d4aed72eb19.png) | ![](https://user-images.githubusercontent.com/11482921/164454055-73dc9f02-286e-4d5c-8f70-c13742e08f42.png) |
|![00000044_patch](https://user-images.githubusercontent.com/11482921/164451232-bacf64fc-e55a-44db-afbb-6b31ab0f8973.png) | ![](https://user-images.githubusercontent.com/11482921/164451318-f309b61a-75b8-4b74-b5f3-595725f1cf0b.png) | ![](https://user-images.githubusercontent.com/11482921/164451348-994f8a35-adbe-4a4b-9c61-feaa294af06a.png) | ![](https://user-images.githubusercontent.com/11482921/164451361-9b7d376e-6f75-4648-b752-542b44845d1c.png) |
**Fewer artifacts, better detailed textures**
| Input | waifu2x | Real-CUGAN | RealESRGAN<br>AnimeVideo-v3 |
| :---: | :---: | :---: | :---: |
|![00000053_patch](https://user-images.githubusercontent.com/11482921/164448411-148a7e5c-cfcd-4504-8bc7-e318eb883bb6.png) | ![](https://user-images.githubusercontent.com/11482921/164448633-dfc15224-b6d2-4403-a3c9-4bb819979364.png) | ![](https://user-images.githubusercontent.com/11482921/164448771-0d359509-5293-4d4c-8e3c-86a2a314ea88.png) | ![](https://user-images.githubusercontent.com/11482921/164448848-1a4ff99e-075b-4458-9db7-2c89e8160aa0.png) |
|![Disney_v4_22_018514_s2_patch](https://user-images.githubusercontent.com/11482921/164451898-83311cdf-bd3e-450f-b9f6-34d7fea3ab79.png) | ![](https://user-images.githubusercontent.com/11482921/164451894-6c56521c-6561-40d6-a3a5-8dde2c167b8a.png) | ![](https://user-images.githubusercontent.com/11482921/164451888-af9b47e3-39dc-4f3e-b0d7-d372d8191e2a.png) | ![](https://user-images.githubusercontent.com/11482921/164451901-31ca4dd4-9847-4baa-8cde-ad50f4053dcf.png) |
|![Japan_v2_0_007261_s2_patch](https://user-images.githubusercontent.com/11482921/164454578-73c77392-77de-49c5-b03c-c36631723192.png) | ![](https://user-images.githubusercontent.com/11482921/164454574-b1ede5f0-4520-4eaa-8f59-086751a34e62.png) | ![](https://user-images.githubusercontent.com/11482921/164454567-4cb3fdd8-6a2d-4016-85b2-a305a8ff80e4.png) | ![](https://user-images.githubusercontent.com/11482921/164454583-7f243f20-eca3-4500-ac43-eb058a4a101a.png) |
|![huluxiongdi_2_patch](https://user-images.githubusercontent.com/11482921/164453482-0726c842-337e-40ec-bf6c-f902ee956a8b.png) | ![](https://user-images.githubusercontent.com/11482921/164453480-71d5e091-5bfa-4c77-9c57-4e37f66ca0a3.png) | ![](https://user-images.githubusercontent.com/11482921/164453468-c295d3c9-3661-45f0-9ecd-406a1877f76e.png) | ![](https://user-images.githubusercontent.com/11482921/164453486-3091887c-587c-450e-b6fe-905cb518d57e.png) |
**Other better results**
| Input | waifu2x | Real-CUGAN | RealESRGAN<br>AnimeVideo-v3 |
| :---: | :---: | :---: | :---: |
|![Japan_v2_1_128525_s1_patch](https://user-images.githubusercontent.com/11482921/164454933-67697f7c-b6ef-47dc-bfca-822a78af8acf.png) | ![](https://user-images.githubusercontent.com/11482921/164454931-9450de7c-f0b3-4638-9c1e-0668e0c41ef0.png) | ![](https://user-images.githubusercontent.com/11482921/164454926-ed746976-786d-41c5-8a83-7693cd774c3a.png) | ![](https://user-images.githubusercontent.com/11482921/164454936-8abdf0f0-fb30-40eb-8281-3b46c0bcb9ae.png) |
|![tianshuqitan_2_patch](https://user-images.githubusercontent.com/11482921/164456948-807c1476-90b6-4507-81da-cb986d01600c.png) | ![](https://user-images.githubusercontent.com/11482921/164456943-25e89de9-d7e5-4f61-a2e1-96786af6ae9e.png) | ![](https://user-images.githubusercontent.com/11482921/164456954-b468c447-59f5-4594-9693-3683e44ba3e6.png) | ![](https://user-images.githubusercontent.com/11482921/164456957-640f910c-3b04-407c-ac20-044d72e19735.png) |
|![00000051_patch](https://user-images.githubusercontent.com/11482921/164456044-e9a6b3fa-b24e-4eb7-acf9-1f7746551b1e.png) ![00000051_patch](https://user-images.githubusercontent.com/11482921/164456421-b67245b0-767d-4250-9105-80bbe507ecfc.png) | ![](https://user-images.githubusercontent.com/11482921/164456040-85763cf2-cb28-4ba3-abb6-1dbb48c55713.png) ![](https://user-images.githubusercontent.com/11482921/164456419-59cf342e-bc1e-4044-868c-e1090abad313.png) | ![](https://user-images.githubusercontent.com/11482921/164456031-4244bb7b-8649-4e01-86f4-40c2099c5afd.png) ![](https://user-images.githubusercontent.com/11482921/164456411-b6afcbe9-c054-448d-a6df-96d3ba3047f8.png) | ![](https://user-images.githubusercontent.com/11482921/164456035-12e270be-fd52-46d4-b18a-3d3b680731fe.png) ![](https://user-images.githubusercontent.com/11482921/164456417-dcaa8b62-f497-427d-b2d2-f390f1200fb9.png) |
|![00000099_patch](https://user-images.githubusercontent.com/11482921/164455312-6411b6e1-5823-4131-a4b0-a6be8a9ae89f.png) | ![](https://user-images.githubusercontent.com/11482921/164455310-f2b99646-3a22-47a4-805b-dc451ac86ddb.png) | ![](https://user-images.githubusercontent.com/11482921/164455294-35471b42-2826-4451-b7ec-6de01344954c.png) | ![](https://user-images.githubusercontent.com/11482921/164455305-fa4c9758-564a-4081-8b4e-f11057a0404d.png) |
|![00000016_patch](https://user-images.githubusercontent.com/11482921/164455672-447353c9-2da2-4fcb-ba4a-7dd6b94c19c1.png) | ![](https://user-images.githubusercontent.com/11482921/164455669-df384631-baaa-42f8-9150-40f658471558.png) | ![](https://user-images.githubusercontent.com/11482921/164455657-68006bf0-138d-4981-aaca-8aa927d2f78a.png) | ![](https://user-images.githubusercontent.com/11482921/164455664-0342b93e-a62a-4b36-a90e-7118f3f1e45d.png) |
## Inference Speed
### PyTorch
Note that we report only the **model** inference time and ignore the IO time.
| GPU | Input Resolution | waifu2x | Real-CUGAN | RealESRGAN-AnimeVideo-v3 |
| :---: | :---: | :---: | :---: | :---: |
| V100 | 1920 x 1080 | - | 3.4 fps | **10.0** fps |
| V100 | 1280 x 720 | - | 7.2 fps | **22.6** fps |
| V100 | 640 x 480 | - | 24.4 fps | **65.9** fps |
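For reference, the following is a minimal sketch of how such a model-only number can be measured, assuming this repository is installed and a CUDA GPU is available (the warmup and iteration counts are arbitrary choices):
```python
# Hedged timing sketch: measure model-only fps (IO excluded) for the
# SRVGGNetCompact architecture used by realesr-animevideov3.
import torch
from realesrgan.archs.srvgg_arch import SRVGGNetCompact

model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16,
                        upscale=4, act_type='prelu').cuda().eval()
x = torch.rand(1, 3, 720, 1280, device='cuda')  # one 1280x720 frame
n = 50
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
with torch.no_grad():
    for _ in range(10):  # warmup
        model(x)
    torch.cuda.synchronize()
    start.record()
    for _ in range(n):
        model(x)
    end.record()
torch.cuda.synchronize()
print(f'{n / (start.elapsed_time(end) / 1000.0):.1f} fps')
```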
### ncnn
- [ ] TODO

docs/anime_model.md (new file)

@@ -0,0 +1,69 @@
# Anime Model
:white_check_mark: We add [*RealESRGAN_x4plus_anime_6B.pth*](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth), which is optimized for **anime** images with much smaller model size.
- [Anime Model](#anime-model)
- [How to Use](#how-to-use)
- [PyTorch Inference](#pytorch-inference)
- [ncnn Executable File](#ncnn-executable-file)
- [Comparisons with waifu2x](#comparisons-with-waifu2x)
- [Comparisons with Sliding Bars](#comparisons-with-sliding-bars)
<p align="center">
<img src="https://raw.githubusercontent.com/xinntao/public-figures/master/Real-ESRGAN/cmp_realesrgan_anime_1.png">
</p>
The following is a video comparison with a sliding bar. You may need to use the full-screen mode for better visual quality, as the original image is large; otherwise, you may encounter aliasing issues.
<https://user-images.githubusercontent.com/17445847/131535127-613250d4-f754-4e20-9720-2f9608ad0675.mp4>
## How to Use
### PyTorch Inference
Pre-trained models: [RealESRGAN_x4plus_anime_6B](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth)
```bash
# download model
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth -P experiments/pretrained_models
# inference
python inference_realesrgan.py -n RealESRGAN_x4plus_anime_6B -i inputs
```
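The same model can also be called from Python instead of the CLI; a minimal sketch based on the `RealESRGANer` usage in `inference_realesrgan.py` (the input file name is a placeholder):
```python
# Hedged sketch: run the anime model through the Python API. Mirrors how
# inference_realesrgan.py builds the upsampler; 'inputs/example.jpg' is a placeholder.
import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer

model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4)
upsampler = RealESRGANer(
    scale=4,
    model_path='experiments/pretrained_models/RealESRGAN_x4plus_anime_6B.pth',
    model=model,
    tile=0,
    tile_pad=10,
    pre_pad=0,
    half=True)  # fp16 inference, matching the CLI default
img = cv2.imread('inputs/example.jpg', cv2.IMREAD_UNCHANGED)
output, _ = upsampler.enhance(img, outscale=4)
cv2.imwrite('results/example_out.png', output)
```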
### ncnn Executable File
Download the latest portable [Windows](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [MacOS](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip) **executable files for Intel/AMD/Nvidia GPU**.
Taking Windows as an example, run:
```bash
./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n realesrgan-x4plus-anime
```
## Comparisons with waifu2x
We compare Real-ESRGAN-anime with [waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan). We use `-n 2 -s 4` for waifu2x.
<p align="center">
<img src="https://raw.githubusercontent.com/xinntao/public-figures/master/Real-ESRGAN/cmp_realesrgan_anime_1.png">
</p>
<p align="center">
<img src="https://raw.githubusercontent.com/xinntao/public-figures/master/Real-ESRGAN/cmp_realesrgan_anime_2.png">
</p>
<p align="center">
<img src="https://raw.githubusercontent.com/xinntao/public-figures/master/Real-ESRGAN/cmp_realesrgan_anime_3.png">
</p>
<p align="center">
<img src="https://raw.githubusercontent.com/xinntao/public-figures/master/Real-ESRGAN/cmp_realesrgan_anime_4.png">
</p>
<p align="center">
<img src="https://raw.githubusercontent.com/xinntao/public-figures/master/Real-ESRGAN/cmp_realesrgan_anime_5.png">
</p>
## Comparisons with Sliding Bars
The following are video comparisons with sliding bars. You may need to use the full-screen mode for better visual quality, as the original images are large; otherwise, you may encounter aliasing issues.
<https://user-images.githubusercontent.com/17445847/131536647-a2fbf896-b495-4a9f-b1dd-ca7bbc90101a.mp4>
<https://user-images.githubusercontent.com/17445847/131536742-6d9d82b6-9765-4296-a15f-18f9aeaa5465.mp4>

docs/anime_video_model.md (new file)

@@ -0,0 +1,123 @@
# Anime Video Models
:white_check_mark: We add small models that are optimized for anime videos :-)<br>
More comparisons can be found in [anime_comparisons.md](anime_comparisons.md)
- [How to Use](#how-to-use)
- [PyTorch Inference](#pytorch-inference)
- [ncnn Executable File](#ncnn-executable-file)
- [Step 1: Use ffmpeg to extract frames from video](#step-1-use-ffmpeg-to-extract-frames-from-video)
- [Step 2: Inference with Real-ESRGAN executable file](#step-2-inference-with-real-esrgan-executable-file)
- [Step 3: Merge the enhanced frames back into a video](#step-3-merge-the-enhanced-frames-back-into-a-video)
- [More Demos](#more-demos)
| Models | Scale | Description |
| ---------------------------------------------------------------------------------------------------------------------------------- | :---- | :----------------------------- |
| [realesr-animevideov3](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth) | X4 <sup>1</sup> | Anime video model with XS size |
Note: <br>
<sup>1</sup> This model can also be used for X1, X2, X3.
---
The following are some demos (best viewed in full-screen mode).
<https://user-images.githubusercontent.com/17445847/145706977-98bc64a4-af27-481c-8abe-c475e15db7ff.MP4>
<https://user-images.githubusercontent.com/17445847/145707055-6a4b79cb-3d9d-477f-8610-c6be43797133.MP4>
<https://user-images.githubusercontent.com/17445847/145783523-f4553729-9f03-44a8-a7cc-782aadf67b50.MP4>
## How to Use
### PyTorch Inference
```bash
# download model
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth -P realesrgan/weights
# inference
python inference_realesrgan_video.py -i inputs/video/onepiece_demo.mp4 -n realesr-animevideov3 -s 2 --suffix outx2
```
### ncnn Executable File
#### Step 1: Use ffmpeg to extract frames from video
```bash
ffmpeg -i onepiece_demo.mp4 -qscale:v 1 -qmin 1 -qmax 1 -vsync 0 tmp_frames/frame%08d.png
```
- Remember to create the folder `tmp_frames` ahead of time (e.g., `mkdir tmp_frames`)
#### Step 2: Inference with Real-ESRGAN executable file
1. Download the latest portable [Windows](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [MacOS](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip) **executable files for Intel/AMD/Nvidia GPU**
1. Taking Windows as an example, run:
```bash
./realesrgan-ncnn-vulkan.exe -i tmp_frames -o out_frames -n realesr-animevideov3 -s 2 -f jpg
```
- Remember to create the folder `out_frames` ahead of time
#### Step 3: Merge the enhanced frames back into a video
1. First obtain the fps of the input video (a programmatic alternative is sketched after these steps):
```bash
ffmpeg -i onepiece_demo.mp4
```
```console
Usage:
-i input video path
```
    You will get output similar to the following screenshot.
<p align="center">
<img src="https://user-images.githubusercontent.com/17445847/145710145-c4f3accf-b82f-4307-9f20-3803a2c73f57.png">
</p>
2. Merge frames
```bash
ffmpeg -r 23.98 -i out_frames/frame%08d.jpg -c:v libx264 -r 23.98 -pix_fmt yuv420p output.mp4
```
```console
Usage:
-i input video path
-c:v video encoder (usually we use libx264)
-r fps, remember to modify it to meet your needs
-pix_fmt pixel format in video
```
    If you also want to copy the audio from the input video, run:
```bash
ffmpeg -r 23.98 -i out_frames/frame%08d.jpg -i onepiece_demo.mp4 -map 0:v:0 -map 1:a:0 -c:a copy -c:v libx264 -r 23.98 -pix_fmt yuv420p output_w_audio.mp4
```
```console
Usage:
-i input video path, here we use two input streams
-c:v video encoder (usually we use libx264)
-r fps, remember to modify it to meet your needs
-pix_fmt pixel format in video
```
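If you prefer to read the fps programmatically rather than from the ffmpeg log, the same `ffmpeg-python` probe used by `inference_realesrgan_video.py` (shown later on this page) can be borrowed; a minimal sketch:
```python
# Hedged sketch: read the input fps with ffmpeg-python, the same approach
# inference_realesrgan_video.py uses when --fps is not given.
import ffmpeg

probe = ffmpeg.probe('onepiece_demo.mp4')
video_stream = next(s for s in probe['streams'] if s['codec_type'] == 'video')
num, den = video_stream['avg_frame_rate'].split('/')
print(float(num) / float(den))  # e.g. 23.98
```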
## More Demos
- Input video for One Piece:
<https://user-images.githubusercontent.com/17445847/145706822-0e83d9c4-78ef-40ee-b2a4-d8b8c3692d17.mp4>
- Output video for One Piece:
<https://user-images.githubusercontent.com/17445847/164960481-759658cf-fcb8-480c-b888-cecb606e8744.mp4>
**More comparisons**
<https://user-images.githubusercontent.com/17445847/145707458-04a5e9b9-2edd-4d1f-b400-380a72e5f5e6.MP4>

docs/model_zoo.md (new file)

@@ -0,0 +1,48 @@
# :european_castle: Model Zoo
- [For General Images](#for-general-images)
- [For Anime Images / Illustrations](#for-anime-images--illustrations)
- [For Animation Videos](#for-animation-videos)
---
## For General Images
| Models | Scale | Description |
| ------------------------------------------------------------------------------------------------------------------------------- | :---- | :------------------------------------------- |
| [RealESRGAN_x4plus](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth) | X4 | X4 model for general images |
| [RealESRGAN_x2plus](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth) | X2 | X2 model for general images |
| [RealESRNet_x4plus](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRNet_x4plus.pth) | X4 | X4 model with MSE loss (over-smooth effects) |
| [official ESRGAN_x4](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth) | X4 | official ESRGAN model |
The following models are **discriminators**, which are usually used for fine-tuning.
| Models | Corresponding model |
| ---------------------------------------------------------------------------------------------------------------------- | :------------------ |
| [RealESRGAN_x4plus_netD](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x4plus_netD.pth) | RealESRGAN_x4plus |
| [RealESRGAN_x2plus_netD](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x2plus_netD.pth) | RealESRGAN_x2plus |
## For Anime Images / Illustrations
| Models | Scale | Description |
| ------------------------------------------------------------------------------------------------------------------------------ | :---- | :---------------------------------------------------------- |
| [RealESRGAN_x4plus_anime_6B](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth) | X4 | Optimized for anime images; 6 RRDB blocks (smaller network) |
The following models are **discriminators**, which are usually used for fine-tuning.
| Models | Corresponding model |
| ---------------------------------------------------------------------------------------------------------------------------------------- | :------------------------- |
| [RealESRGAN_x4plus_anime_6B_netD](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B_netD.pth) | RealESRGAN_x4plus_anime_6B |
## For Animation Videos
| Models | Scale | Description |
| ---------------------------------------------------------------------------------------------------------------------------------- | :---- | :----------------------------- |
| [realesr-animevideov3](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth) | X4<sup>1</sup> | Anime video model with XS size |
Note: <br>
<sup>1</sup> This model can also be used for X1, X2, X3.
The following models are **discriminators**, which are usually used for fine-tuning.
TODO

docs/ncnn_conversion.md (new file)

@@ -0,0 +1,11 @@
# Instructions on converting to NCNN models
1. Convert to an onnx model with `scripts/pytorch2onnx.py`. Remember to modify the code accordingly (a minimal sketch of this step follows the list)
1. Convert onnx model to ncnn model
1. `cd ncnn-master\ncnn\build\tools\onnx`
1. `onnx2ncnn.exe realesrgan-x4.onnx realesrgan-x4-raw.param realesrgan-x4-raw.bin`
1. Optimize ncnn model
1. fp16 mode
1. `cd ncnn-master\ncnn\build\tools`
1. `ncnnoptimize.exe realesrgan-x4-raw.param realesrgan-x4-raw.bin realesrgan-x4.param realesrgan-x4.bin 1`
1. Modify the blob name in `realesrgan-x4.param`: `data` and `output`
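A minimal sketch of step 1, assuming the RealESRGAN_x4plus checkpoint and the RRDBNet architecture from this repository (the opset version and dummy input shape are arbitrary choices; `scripts/pytorch2onnx.py` remains the reference):
```python
# Hedged sketch of the PyTorch -> ONNX conversion (step 1). The blob names
# 'data' and 'output' match the names referenced in step 3.
import torch
from basicsr.archs.rrdbnet_arch import RRDBNet

model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
state = torch.load('experiments/pretrained_models/RealESRGAN_x4plus.pth', map_location='cpu')
model.load_state_dict(state['params_ema'] if 'params_ema' in state else state['params'], strict=True)
model.eval()

dummy = torch.rand(1, 3, 64, 64)  # arbitrary input size for tracing
torch.onnx.export(model, dummy, 'realesrgan-x4.onnx', opset_version=11,
                  input_names=['data'], output_names=['output'])
```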

feedback.md (new file)

@@ -0,0 +1,11 @@
# Feedback
## Anime Illustration Model
1. Cannot handle videos: the current model is not designed for videos, so video results are poor. We are exploring video-oriented models.
1. Depth of field and intentional blur are mishandled: the current model restores regions that were deliberately blurred (depth of field, bokeh), which looks bad. We will consider incorporating this information later; a simple approach is to detect depth of field and intentional blur, then tell the network, as a condition, where to restore strongly and where to restore weakly.
1. Not adjustable: Waifu2X can be tuned to personal taste, but Real-ESRGAN-anime cannot, so some restoration results look overdone.
1. The original style is changed: different anime illustrations have their own styles, while the current Real-ESRGAN-anime tends to restore everything toward a single style (an effect of the training dataset). Style is an essential element of anime, so it should be preserved as much as possible.
1. The model is too large: the current model is too slow to run and could be faster. We have related work exploring this and hope to apply the results to the Real-ESRGAN series of models soon.
Thanks to [2ji3150](https://github.com/2ji3150) for the [detailed and valuable feedback/suggestions](https://github.com/xinntao/Real-ESRGAN/issues/131).

inference_realesrgan.py (modified)

@@ -1,27 +1,34 @@
import argparse
import cv2
import glob
import math
import numpy as np
import os
import torch
from basicsr.archs.rrdbnet_arch import RRDBNet
from torch.nn import functional as F
from realesrgan import RealESRGANer
from realesrgan.archs.srvgg_arch import SRVGGNetCompact
def main():
"""Inference demo for Real-ESRGAN.
"""
parser = argparse.ArgumentParser()
parser.add_argument('--input', type=str, default='inputs', help='Input image or folder')
parser.add_argument('-i', '--input', type=str, default='inputs', help='Input image or folder')
parser.add_argument(
'--model_path',
'-n',
'--model_name',
type=str,
default='experiments/pretrained_models/RealESRGAN_x4plus.pth',
help='Path to the pre-trained model')
parser.add_argument('--scale', type=int, default=4, help='Upsample scale factor')
default='RealESRGAN_x4plus',
help=('Model names: RealESRGAN_x4plus | RealESRNet_x4plus | RealESRGAN_x4plus_anime_6B | RealESRGAN_x2plus | '
'realesr-animevideov3'))
parser.add_argument('-o', '--output', type=str, default='results', help='Output folder')
parser.add_argument('-s', '--outscale', type=float, default=4, help='The final upsampling scale of the image')
parser.add_argument('--suffix', type=str, default='out', help='Suffix of the restored image')
parser.add_argument('--tile', type=int, default=0, help='Tile size, 0 for no tile during testing')
parser.add_argument('-t', '--tile', type=int, default=0, help='Tile size, 0 for no tile during testing')
parser.add_argument('--tile_pad', type=int, default=10, help='Tile padding')
parser.add_argument('--pre_pad', type=int, default=0, help='Pre padding size at each border')
parser.add_argument('--face_enhance', action='store_true', help='Use GFPGAN to enhance face')
parser.add_argument(
'--fp32', action='store_true', help='Use fp32 precision during inference. Default: fp16 (half precision).')
parser.add_argument(
'--alpha_upsampler',
type=str,
@@ -34,9 +41,48 @@ def main():
help='Image extension. Options: auto | jpg | png, auto means using the same extension as inputs')
args = parser.parse_args()
# determine models according to model names
args.model_name = args.model_name.split('.')[0]
if args.model_name in ['RealESRGAN_x4plus', 'RealESRNet_x4plus']: # x4 RRDBNet model
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
netscale = 4
elif args.model_name in ['RealESRGAN_x4plus_anime_6B']: # x4 RRDBNet model with 6 blocks
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4)
netscale = 4
elif args.model_name in ['RealESRGAN_x2plus']: # x2 RRDBNet model
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2)
netscale = 2
elif args.model_name in ['realesr-animevideov3']: # x4 VGG-style model (XS size)
model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu')
netscale = 4
# determine model paths
model_path = os.path.join('experiments/pretrained_models', args.model_name + '.pth')
if not os.path.isfile(model_path):
model_path = os.path.join('realesrgan/weights', args.model_name + '.pth')
if not os.path.isfile(model_path):
raise ValueError(f'Model {args.model_name} does not exist.')
# restorer
upsampler = RealESRGANer(
scale=args.scale, model_path=args.model_path, tile=args.tile, tile_pad=args.tile_pad, pre_pad=args.pre_pad)
os.makedirs('results/', exist_ok=True)
scale=netscale,
model_path=model_path,
model=model,
tile=args.tile,
tile_pad=args.tile_pad,
pre_pad=args.pre_pad,
half=not args.fp32)
if args.face_enhance: # Use GFPGAN for face enhancement
from gfpgan import GFPGANer
face_enhancer = GFPGANer(
model_path='https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth',
upscale=args.outscale,
arch='clean',
channel_multiplier=2,
bg_upsampler=upsampler)
os.makedirs(args.output, exist_ok=True)
if os.path.isfile(args.input):
paths = [args.input]
else:
@@ -46,197 +92,32 @@ def main():
imgname, extension = os.path.splitext(os.path.basename(path))
print('Testing', idx, imgname)
# ------------------------------ read image ------------------------------ #
img = cv2.imread(path, cv2.IMREAD_UNCHANGED).astype(np.float32)
if np.max(img) > 255: # 16-bit image
max_range = 65535
print('\tInput is a 16-bit image')
else:
max_range = 255
img = img / max_range
if len(img.shape) == 2: # gray image
img_mode = 'L'
img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
elif img.shape[2] == 4: # RGBA image with alpha channel
img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
if len(img.shape) == 3 and img.shape[2] == 4:
img_mode = 'RGBA'
alpha = img[:, :, 3]
img = img[:, :, 0:3]
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
if args.alpha_upsampler == 'realesrgan':
alpha = cv2.cvtColor(alpha, cv2.COLOR_GRAY2RGB)
else:
img_mode = 'RGB'
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img_mode = None
# ------------------- process image (without the alpha channel) ------------------- #
upsampler.pre_process(img)
if args.tile:
upsampler.tile_process()
else:
upsampler.process()
output_img = upsampler.post_process()
output_img = output_img.data.squeeze().float().cpu().clamp_(0, 1).numpy()
output_img = np.transpose(output_img[[2, 1, 0], :, :], (1, 2, 0))
if img_mode == 'L':
output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2GRAY)
# ------------------- process the alpha channel if necessary ------------------- #
if img_mode == 'RGBA':
if args.alpha_upsampler == 'realesrgan':
upsampler.pre_process(alpha)
if args.tile:
upsampler.tile_process()
else:
upsampler.process()
output_alpha = upsampler.post_process()
output_alpha = output_alpha.data.squeeze().float().cpu().clamp_(0, 1).numpy()
output_alpha = np.transpose(output_alpha[[2, 1, 0], :, :], (1, 2, 0))
output_alpha = cv2.cvtColor(output_alpha, cv2.COLOR_BGR2GRAY)
else:
h, w = alpha.shape[0:2]
output_alpha = cv2.resize(alpha, (w * args.scale, h * args.scale), interpolation=cv2.INTER_LINEAR)
# merge the alpha channel
output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2BGRA)
output_img[:, :, 3] = output_alpha
# ------------------------------ save image ------------------------------ #
if args.ext == 'auto':
extension = extension[1:]
else:
extension = args.ext
if img_mode == 'RGBA': # RGBA images should be saved in png format
extension = 'png'
save_path = f'results/{imgname}_{args.suffix}.{extension}'
if max_range == 65535: # 16-bit image
output = (output_img * 65535.0).round().astype(np.uint16)
else:
output = (output_img * 255.0).round().astype(np.uint8)
cv2.imwrite(save_path, output)
class RealESRGANer():
def __init__(self, scale, model_path, tile=0, tile_pad=10, pre_pad=10):
self.scale = scale
self.tile_size = tile
self.tile_pad = tile_pad
self.pre_pad = pre_pad
self.mod_scale = None
# initialize model
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=scale)
loadnet = torch.load(model_path)
if 'params_ema' in loadnet:
keyname = 'params_ema'
else:
keyname = 'params'
model.load_state_dict(loadnet[keyname], strict=True)
model.eval()
self.model = model.to(self.device)
def pre_process(self, img):
img = torch.from_numpy(np.transpose(img, (2, 0, 1))).float()
self.img = img.unsqueeze(0).to(self.device)
# pre_pad
if self.pre_pad != 0:
self.img = F.pad(self.img, (0, self.pre_pad, 0, self.pre_pad), 'reflect')
# mod pad
if self.scale == 2:
self.mod_scale = 2
elif self.scale == 1:
self.mod_scale = 4
if self.mod_scale is not None:
self.mod_pad_h, self.mod_pad_w = 0, 0
_, _, h, w = self.img.size()
if (h % self.mod_scale != 0):
self.mod_pad_h = (self.mod_scale - h % self.mod_scale)
if (w % self.mod_scale != 0):
self.mod_pad_w = (self.mod_scale - w % self.mod_scale)
self.img = F.pad(self.img, (0, self.mod_pad_w, 0, self.mod_pad_h), 'reflect')
def process(self):
try:
# inference
with torch.no_grad():
self.output = self.model(self.img)
except Exception as error:
if args.face_enhance:
_, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True)
else:
output, _ = upsampler.enhance(img, outscale=args.outscale)
except RuntimeError as error:
print('Error', error)
def tile_process(self):
"""Modified from: https://github.com/ata4/esrgan-launcher
"""
batch, channel, height, width = self.img.shape
output_height = height * self.scale
output_width = width * self.scale
output_shape = (batch, channel, output_height, output_width)
# start with black image
self.output = self.img.new_zeros(output_shape)
tiles_x = math.ceil(width / self.tile_size)
tiles_y = math.ceil(height / self.tile_size)
# loop over all tiles
for y in range(tiles_y):
for x in range(tiles_x):
# extract tile from input image
ofs_x = x * self.tile_size
ofs_y = y * self.tile_size
# input tile area on total image
input_start_x = ofs_x
input_end_x = min(ofs_x + self.tile_size, width)
input_start_y = ofs_y
input_end_y = min(ofs_y + self.tile_size, height)
# input tile area on total image with padding
input_start_x_pad = max(input_start_x - self.tile_pad, 0)
input_end_x_pad = min(input_end_x + self.tile_pad, width)
input_start_y_pad = max(input_start_y - self.tile_pad, 0)
input_end_y_pad = min(input_end_y + self.tile_pad, height)
# input tile dimensions
input_tile_width = input_end_x - input_start_x
input_tile_height = input_end_y - input_start_y
tile_idx = y * tiles_x + x + 1
input_tile = self.img[:, :, input_start_y_pad:input_end_y_pad, input_start_x_pad:input_end_x_pad]
# upscale tile
try:
with torch.no_grad():
output_tile = self.model(input_tile)
except Exception as error:
print('Error', error)
print(f'\tTile {tile_idx}/{tiles_x * tiles_y}')
# output tile area on total image
output_start_x = input_start_x * self.scale
output_end_x = input_end_x * self.scale
output_start_y = input_start_y * self.scale
output_end_y = input_end_y * self.scale
# output tile area without padding
output_start_x_tile = (input_start_x - input_start_x_pad) * self.scale
output_end_x_tile = output_start_x_tile + input_tile_width * self.scale
output_start_y_tile = (input_start_y - input_start_y_pad) * self.scale
output_end_y_tile = output_start_y_tile + input_tile_height * self.scale
# put tile into output image
self.output[:, :, output_start_y:output_end_y,
output_start_x:output_end_x] = output_tile[:, :, output_start_y_tile:output_end_y_tile,
output_start_x_tile:output_end_x_tile]
def post_process(self):
# remove extra pad
if self.mod_scale is not None:
_, _, h, w = self.output.size()
self.output = self.output[:, :, 0:h - self.mod_pad_h * self.scale, 0:w - self.mod_pad_w * self.scale]
# remove prepad
if self.pre_pad != 0:
_, _, h, w = self.output.size()
self.output = self.output[:, :, 0:h - self.pre_pad * self.scale, 0:w - self.pre_pad * self.scale]
return self.output
print('If you encounter CUDA out of memory, try to set --tile with a smaller number.')
else:
if args.ext == 'auto':
extension = extension[1:]
else:
extension = args.ext
if img_mode == 'RGBA': # RGBA images should be saved in png format
extension = 'png'
if args.suffix == '':
save_path = os.path.join(args.output, f'{imgname}.{extension}')
else:
save_path = os.path.join(args.output, f'{imgname}_{args.suffix}.{extension}')
cv2.imwrite(save_path, output)
if __name__ == '__main__':

inference_realesrgan_video.py (new file)

@@ -0,0 +1,185 @@
import argparse
import glob
import mimetypes
import os
import queue
import shutil
import torch
from basicsr.archs.rrdbnet_arch import RRDBNet
from basicsr.utils.logger import AvgTimer
from tqdm import tqdm
from realesrgan import IOConsumer, PrefetchReader, RealESRGANer
from realesrgan.archs.srvgg_arch import SRVGGNetCompact
def main():
"""Inference demo for Real-ESRGAN.
It is mainly used for restoring anime videos.
"""
parser = argparse.ArgumentParser()
parser.add_argument('-i', '--input', type=str, default='inputs', help='Input video, image or folder')
parser.add_argument(
'-n',
'--model_name',
type=str,
default='realesr-animevideov3',
help=('Model names: realesr-animevideov3 | RealESRGAN_x4plus_anime_6B | RealESRGAN_x4plus | RealESRNet_x4plus |'
' RealESRGAN_x2plus. '
'Default: realesr-animevideov3'))
parser.add_argument('-o', '--output', type=str, default='results', help='Output folder')
parser.add_argument('-s', '--outscale', type=float, default=4, help='The final upsampling scale of the image')
parser.add_argument('--suffix', type=str, default='out', help='Suffix of the restored video')
parser.add_argument('-t', '--tile', type=int, default=0, help='Tile size, 0 for no tile during testing')
parser.add_argument('--tile_pad', type=int, default=10, help='Tile padding')
parser.add_argument('--pre_pad', type=int, default=0, help='Pre padding size at each border')
parser.add_argument('--face_enhance', action='store_true', help='Use GFPGAN to enhance face')
parser.add_argument(
'--fp32', action='store_true', help='Use fp32 precision during inference. Default: fp16 (half precision).')
parser.add_argument('--fps', type=float, default=None, help='FPS of the output video')
parser.add_argument('--consumer', type=int, default=4, help='Number of IO consumers')
parser.add_argument(
'--alpha_upsampler',
type=str,
default='realesrgan',
help='The upsampler for the alpha channels. Options: realesrgan | bicubic')
parser.add_argument(
'--ext',
type=str,
default='auto',
help='Image extension. Options: auto | jpg | png, auto means using the same extension as inputs')
args = parser.parse_args()
# ---------------------- determine models according to model names ---------------------- #
args.model_name = args.model_name.split('.pth')[0]
if args.model_name in ['RealESRGAN_x4plus', 'RealESRNet_x4plus']: # x4 RRDBNet model
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
netscale = 4
elif args.model_name in ['RealESRGAN_x4plus_anime_6B']: # x4 RRDBNet model with 6 blocks
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4)
netscale = 4
elif args.model_name in ['RealESRGAN_x2plus']: # x2 RRDBNet model
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2)
netscale = 2
elif args.model_name in ['realesr-animevideov3']: # x4 VGG-style model (XS size)
model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu')
netscale = 4
# ---------------------- determine model paths ---------------------- #
model_path = os.path.join('experiments/pretrained_models', args.model_name + '.pth')
if not os.path.isfile(model_path):
model_path = os.path.join('realesrgan/weights', args.model_name + '.pth')
if not os.path.isfile(model_path):
raise ValueError(f'Model {args.model_name} does not exist.')
# restorer
upsampler = RealESRGANer(
scale=netscale,
model_path=model_path,
model=model,
tile=args.tile,
tile_pad=args.tile_pad,
pre_pad=args.pre_pad,
half=not args.fp32)
if args.face_enhance: # Use GFPGAN for face enhancement
from gfpgan import GFPGANer
face_enhancer = GFPGANer(
model_path='https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth',
upscale=args.outscale,
arch='clean',
channel_multiplier=2,
bg_upsampler=upsampler)
os.makedirs(args.output, exist_ok=True)
# for saving restored frames
save_frame_folder = os.path.join(args.output, 'frames_tmpout')
os.makedirs(save_frame_folder, exist_ok=True)
# input can be a video file / a folder of frames / an image
if mimetypes.guess_type(args.input)[0].startswith('video'): # is a video file
video_name = os.path.splitext(os.path.basename(args.input))[0]
frame_folder = os.path.join('tmp_frames', video_name)
os.makedirs(frame_folder, exist_ok=True)
# use ffmpeg to extract frames
os.system(f'ffmpeg -i {args.input} -qscale:v 1 -qmin 1 -qmax 1 -vsync 0 {frame_folder}/frame%08d.png')
# get image path list
paths = sorted(glob.glob(os.path.join(frame_folder, '*')))
# get input video fps
if args.fps is None:
import ffmpeg
probe = ffmpeg.probe(args.input)
video_streams = [stream for stream in probe['streams'] if stream['codec_type'] == 'video']
args.fps = eval(video_streams[0]['avg_frame_rate'])
elif mimetypes.guess_type(args.input)[0].startswith('image'): # is an image file
paths = [args.input]
video_name = 'video'
else:
paths = sorted(glob.glob(os.path.join(args.input, '*')))
video_name = 'video'
timer = AvgTimer()
timer.start()
pbar = tqdm(total=len(paths), unit='frame', desc='inference')
# set up prefetch reader
reader = PrefetchReader(paths, num_prefetch_queue=4)
reader.start()
que = queue.Queue()
consumers = [IOConsumer(args, que, f'IO_{i}') for i in range(args.consumer)]
for consumer in consumers:
consumer.start()
for idx, (path, img) in enumerate(zip(paths, reader)):
imgname, extension = os.path.splitext(os.path.basename(path))
if len(img.shape) == 3 and img.shape[2] == 4:
img_mode = 'RGBA'
else:
img_mode = None
try:
if args.face_enhance:
_, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True)
else:
output, _ = upsampler.enhance(img, outscale=args.outscale)
except RuntimeError as error:
print('Error', error)
print('If you encounter CUDA out of memory, try to set --tile with a smaller number.')
else:
if args.ext == 'auto':
extension = extension[1:]
else:
extension = args.ext
if img_mode == 'RGBA': # RGBA images should be saved in png format
extension = 'png'
save_path = os.path.join(save_frame_folder, f'{imgname}_out.{extension}')
que.put({'output': output, 'save_path': save_path})
pbar.update(1)
torch.cuda.synchronize()
timer.record()
avg_fps = 1. / (timer.get_avg_time() + 1e-7)
pbar.set_description(f'idx {idx}, fps {avg_fps:.2f}')
for _ in range(args.consumer):
que.put('quit')
for consumer in consumers:
consumer.join()
pbar.close()
# merge frames to video
video_save_path = os.path.join(args.output, f'{video_name}_{args.suffix}.mp4')
os.system(f'ffmpeg -r {args.fps} -i {save_frame_folder}/frame%08d_out.{extension} -i {args.input}'
f' -map 0:v:0 -map 1:a:0 -c:a copy -c:v libx264 -r {args.fps} -pix_fmt yuv420p {video_save_path}')
# delete tmp file
shutil.rmtree(save_frame_folder)
if os.path.isdir(frame_folder):
shutil.rmtree(frame_folder)
if __name__ == '__main__':
main()


options/finetune_realesrgan_x4plus.yml (new file)

@@ -0,0 +1,188 @@
# general settings
name: finetune_RealESRGANx4plus_400k
model_type: RealESRGANModel
scale: 4
num_gpu: auto
manual_seed: 0
# ----------------- options for synthesizing training data in RealESRGANModel ----------------- #
# USM the ground-truth
l1_gt_usm: True
percep_gt_usm: True
gan_gt_usm: False
# the first degradation process
resize_prob: [0.2, 0.7, 0.1] # up, down, keep
resize_range: [0.15, 1.5]
gaussian_noise_prob: 0.5
noise_range: [1, 30]
poisson_scale_range: [0.05, 3]
gray_noise_prob: 0.4
jpeg_range: [30, 95]
# the second degradation process
second_blur_prob: 0.8
resize_prob2: [0.3, 0.4, 0.3] # up, down, keep
resize_range2: [0.3, 1.2]
gaussian_noise_prob2: 0.5
noise_range2: [1, 25]
poisson_scale_range2: [0.05, 2.5]
gray_noise_prob2: 0.4
jpeg_range2: [30, 95]
gt_size: 256
queue_size: 180
# dataset and data loader settings
datasets:
train:
name: DF2K+OST
type: RealESRGANDataset
dataroot_gt: datasets/DF2K
meta_info: datasets/DF2K/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt
io_backend:
type: disk
blur_kernel_size: 21
kernel_list: ['iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso']
kernel_prob: [0.45, 0.25, 0.12, 0.03, 0.12, 0.03]
sinc_prob: 0.1
blur_sigma: [0.2, 3]
betag_range: [0.5, 4]
betap_range: [1, 2]
blur_kernel_size2: 21
kernel_list2: ['iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso']
kernel_prob2: [0.45, 0.25, 0.12, 0.03, 0.12, 0.03]
sinc_prob2: 0.1
blur_sigma2: [0.2, 1.5]
betag_range2: [0.5, 4]
betap_range2: [1, 2]
final_sinc_prob: 0.8
gt_size: 256
use_hflip: True
use_rot: False
# data loader
use_shuffle: true
num_worker_per_gpu: 5
batch_size_per_gpu: 12
dataset_enlarge_ratio: 1
prefetch_mode: ~
# Uncomment these for validation
# val:
# name: validation
# type: PairedImageDataset
# dataroot_gt: path_to_gt
# dataroot_lq: path_to_lq
# io_backend:
# type: disk
# network structures
network_g:
type: RRDBNet
num_in_ch: 3
num_out_ch: 3
num_feat: 64
num_block: 23
num_grow_ch: 32
network_d:
type: UNetDiscriminatorSN
num_in_ch: 3
num_feat: 64
skip_connection: True
# path
path:
# use the pre-trained Real-ESRNet model
pretrain_network_g: experiments/pretrained_models/RealESRNet_x4plus.pth
param_key_g: params_ema
strict_load_g: true
pretrain_network_d: experiments/pretrained_models/RealESRGAN_x4plus_netD.pth
param_key_d: params
strict_load_d: true
resume_state: ~
# training settings
train:
ema_decay: 0.999
optim_g:
type: Adam
lr: !!float 1e-4
weight_decay: 0
betas: [0.9, 0.99]
optim_d:
type: Adam
lr: !!float 1e-4
weight_decay: 0
betas: [0.9, 0.99]
scheduler:
type: MultiStepLR
milestones: [400000]
gamma: 0.5
total_iter: 400000
warmup_iter: -1 # no warm up
# losses
pixel_opt:
type: L1Loss
loss_weight: 1.0
reduction: mean
# perceptual loss (content and style losses)
perceptual_opt:
type: PerceptualLoss
layer_weights:
# before relu
'conv1_2': 0.1
'conv2_2': 0.1
'conv3_4': 1
'conv4_4': 1
'conv5_4': 1
vgg_type: vgg19
use_input_norm: true
perceptual_weight: !!float 1.0
style_weight: 0
range_norm: false
criterion: l1
# gan loss
gan_opt:
type: GANLoss
gan_type: vanilla
real_label_val: 1.0
fake_label_val: 0.0
loss_weight: !!float 1e-1
net_d_iters: 1
net_d_init_iters: 0
# Uncomment these for validation
# validation settings
# val:
# val_freq: !!float 5e3
# save_img: True
# metrics:
# psnr: # metric name
# type: calculate_psnr
# crop_border: 4
# test_y_channel: false
# logging settings
logger:
print_freq: 100
save_checkpoint_freq: !!float 5e3
use_tb_logger: true
wandb:
project: ~
resume_id: ~
# dist training settings
dist_params:
backend: nccl
port: 29500

options/finetune_realesrgan_x4plus_pairdata.yml (new file)

@@ -0,0 +1,150 @@
# general settings
name: finetune_RealESRGANx4plus_400k_pairdata
model_type: RealESRGANModel
scale: 4
num_gpu: auto
manual_seed: 0
# USM the ground-truth
l1_gt_usm: True
percep_gt_usm: True
gan_gt_usm: False
high_order_degradation: False # do not use the high-order degradation generation process
# dataset and data loader settings
datasets:
train:
name: DIV2K
type: RealESRGANPairedDataset
dataroot_gt: datasets/DF2K
dataroot_lq: datasets/DF2K
meta_info: datasets/DF2K/meta_info/meta_info_DIV2K_sub_pair.txt
io_backend:
type: disk
gt_size: 256
use_hflip: True
use_rot: False
# data loader
use_shuffle: true
num_worker_per_gpu: 5
batch_size_per_gpu: 12
dataset_enlarge_ratio: 1
prefetch_mode: ~
# Uncomment these for validation
# val:
# name: validation
# type: PairedImageDataset
# dataroot_gt: path_to_gt
# dataroot_lq: path_to_lq
# io_backend:
# type: disk
# network structures
network_g:
type: RRDBNet
num_in_ch: 3
num_out_ch: 3
num_feat: 64
num_block: 23
num_grow_ch: 32
network_d:
type: UNetDiscriminatorSN
num_in_ch: 3
num_feat: 64
skip_connection: True
# path
path:
# use the pre-trained Real-ESRNet model
pretrain_network_g: experiments/pretrained_models/RealESRNet_x4plus.pth
param_key_g: params_ema
strict_load_g: true
pretrain_network_d: experiments/pretrained_models/RealESRGAN_x4plus_netD.pth
param_key_d: params
strict_load_d: true
resume_state: ~
# training settings
train:
ema_decay: 0.999
optim_g:
type: Adam
lr: !!float 1e-4
weight_decay: 0
betas: [0.9, 0.99]
optim_d:
type: Adam
lr: !!float 1e-4
weight_decay: 0
betas: [0.9, 0.99]
scheduler:
type: MultiStepLR
milestones: [400000]
gamma: 0.5
total_iter: 400000
warmup_iter: -1 # no warm up
# losses
pixel_opt:
type: L1Loss
loss_weight: 1.0
reduction: mean
# perceptual loss (content and style losses)
perceptual_opt:
type: PerceptualLoss
layer_weights:
# before relu
'conv1_2': 0.1
'conv2_2': 0.1
'conv3_4': 1
'conv4_4': 1
'conv5_4': 1
vgg_type: vgg19
use_input_norm: true
perceptual_weight: !!float 1.0
style_weight: 0
range_norm: false
criterion: l1
# gan loss
gan_opt:
type: GANLoss
gan_type: vanilla
real_label_val: 1.0
fake_label_val: 0.0
loss_weight: !!float 1e-1
net_d_iters: 1
net_d_init_iters: 0
# Uncomment these for validation
# validation settings
# val:
# val_freq: !!float 5e3
# save_img: True
# metrics:
# psnr: # metric name
# type: calculate_psnr
# crop_border: 4
# test_y_channel: false
# logging settings
logger:
print_freq: 100
save_checkpoint_freq: !!float 5e3
use_tb_logger: true
wandb:
project: ~
resume_id: ~
# dist training settings
dist_params:
backend: nccl
port: 29500

options/train_realesrgan_x2plus.yml (new file)

@@ -0,0 +1,186 @@
# general settings
name: train_RealESRGANx2plus_400k_B12G4
model_type: RealESRGANModel
scale: 2
num_gpu: auto # auto: can infer from your visible devices automatically. official: 4 GPUs
manual_seed: 0
# ----------------- options for synthesizing training data in RealESRGANModel ----------------- #
# USM the ground-truth
l1_gt_usm: True
percep_gt_usm: True
gan_gt_usm: False
# the first degradation process
resize_prob: [0.2, 0.7, 0.1] # up, down, keep
resize_range: [0.15, 1.5]
gaussian_noise_prob: 0.5
noise_range: [1, 30]
poisson_scale_range: [0.05, 3]
gray_noise_prob: 0.4
jpeg_range: [30, 95]
# the second degradation process
second_blur_prob: 0.8
resize_prob2: [0.3, 0.4, 0.3] # up, down, keep
resize_range2: [0.3, 1.2]
gaussian_noise_prob2: 0.5
noise_range2: [1, 25]
poisson_scale_range2: [0.05, 2.5]
gray_noise_prob2: 0.4
jpeg_range2: [30, 95]
gt_size: 256
queue_size: 180
# dataset and data loader settings
datasets:
train:
name: DF2K+OST
type: RealESRGANDataset
dataroot_gt: datasets/DF2K
meta_info: datasets/DF2K/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt
io_backend:
type: disk
blur_kernel_size: 21
kernel_list: ['iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso']
kernel_prob: [0.45, 0.25, 0.12, 0.03, 0.12, 0.03]
sinc_prob: 0.1
blur_sigma: [0.2, 3]
betag_range: [0.5, 4]
betap_range: [1, 2]
blur_kernel_size2: 21
kernel_list2: ['iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso']
kernel_prob2: [0.45, 0.25, 0.12, 0.03, 0.12, 0.03]
sinc_prob2: 0.1
blur_sigma2: [0.2, 1.5]
betag_range2: [0.5, 4]
betap_range2: [1, 2]
final_sinc_prob: 0.8
gt_size: 256
use_hflip: True
use_rot: False
# data loader
use_shuffle: true
num_worker_per_gpu: 5
batch_size_per_gpu: 12
dataset_enlarge_ratio: 1
prefetch_mode: ~
# Uncomment these for validation
# val:
# name: validation
# type: PairedImageDataset
# dataroot_gt: path_to_gt
# dataroot_lq: path_to_lq
# io_backend:
# type: disk
# network structures
network_g:
type: RRDBNet
num_in_ch: 3
num_out_ch: 3
num_feat: 64
num_block: 23
num_grow_ch: 32
scale: 2
network_d:
type: UNetDiscriminatorSN
num_in_ch: 3
num_feat: 64
skip_connection: True
# path
path:
# use the pre-trained Real-ESRNet model
pretrain_network_g: experiments/pretrained_models/RealESRNet_x2plus.pth
param_key_g: params_ema
strict_load_g: true
resume_state: ~
# training settings
train:
ema_decay: 0.999
optim_g:
type: Adam
lr: !!float 1e-4
weight_decay: 0
betas: [0.9, 0.99]
optim_d:
type: Adam
lr: !!float 1e-4
weight_decay: 0
betas: [0.9, 0.99]
scheduler:
type: MultiStepLR
milestones: [400000]
gamma: 0.5
total_iter: 400000
warmup_iter: -1 # no warm up
# losses
pixel_opt:
type: L1Loss
loss_weight: 1.0
reduction: mean
# perceptual loss (content and style losses)
perceptual_opt:
type: PerceptualLoss
layer_weights:
# before relu
'conv1_2': 0.1
'conv2_2': 0.1
'conv3_4': 1
'conv4_4': 1
'conv5_4': 1
vgg_type: vgg19
use_input_norm: true
perceptual_weight: !!float 1.0
style_weight: 0
range_norm: false
criterion: l1
# gan loss
gan_opt:
type: GANLoss
gan_type: vanilla
real_label_val: 1.0
fake_label_val: 0.0
loss_weight: !!float 1e-1
net_d_iters: 1
net_d_init_iters: 0
# Uncomment these for validation
# validation settings
# val:
# val_freq: !!float 5e3
# save_img: True
# metrics:
# psnr: # metric name
# type: calculate_psnr
# crop_border: 4
# test_y_channel: false
# logging settings
logger:
print_freq: 100
save_checkpoint_freq: !!float 5e3
use_tb_logger: true
wandb:
project: ~
resume_id: ~
# dist training settings
dist_params:
backend: nccl
port: 29500

options/train_realesrgan_x4plus.yml (modified)

@@ -1,8 +1,8 @@
# general settings
name: train_RealESRGANx4plus_400k_B12G4_fromRealESRNet
name: train_RealESRGANx4plus_400k_B12G4
model_type: RealESRGANModel
scale: 4
num_gpu: 4
num_gpu: auto # auto: can infer from your visible devices automatically. official: 4 GPUs
manual_seed: 0
# ----------------- options for synthesizing training data in RealESRGANModel ----------------- #
@@ -39,7 +39,7 @@ datasets:
name: DF2K+OST
type: RealESRGANDataset
dataroot_gt: datasets/DF2K
meta_info: data/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt
meta_info: datasets/DF2K/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt
io_backend:
type: disk
@@ -90,7 +90,6 @@ network_g:
num_block: 23
num_grow_ch: 32
network_d:
type: UNetDiscriminatorSN
num_in_ch: 3
@@ -100,7 +99,7 @@ network_d:
# path
path:
# use the pre-trained Real-ESRNet model
pretrain_network_g: experiments/train_RealESRNetx4plus_1000k_B12G4_fromESRGAN/model/net_g_1000000.pth
pretrain_network_g: experiments/pretrained_models/RealESRNet_x4plus.pth
param_key_g: params_ema
strict_load_g: true
resume_state: ~
@@ -166,7 +165,7 @@ train:
# save_img: True
# metrics:
# psnr: # metric name, can be arbitrary
# psnr: # metric name
# type: calculate_psnr
# crop_border: 4
# test_y_channel: false

options/train_realesrnet_x2plus.yml (new file)

@@ -0,0 +1,145 @@
# general settings
name: train_RealESRNetx2plus_1000k_B12G4
model_type: RealESRNetModel
scale: 2
num_gpu: auto # auto: can infer from your visible devices automatically. official: 4 GPUs
manual_seed: 0
# ----------------- options for synthesizing training data in RealESRNetModel ----------------- #
gt_usm: True # USM the ground-truth
# the first degradation process
resize_prob: [0.2, 0.7, 0.1] # up, down, keep
resize_range: [0.15, 1.5]
gaussian_noise_prob: 0.5
noise_range: [1, 30]
poisson_scale_range: [0.05, 3]
gray_noise_prob: 0.4
jpeg_range: [30, 95]
# the second degradation process
second_blur_prob: 0.8
resize_prob2: [0.3, 0.4, 0.3] # up, down, keep
resize_range2: [0.3, 1.2]
gaussian_noise_prob2: 0.5
noise_range2: [1, 25]
poisson_scale_range2: [0.05, 2.5]
gray_noise_prob2: 0.4
jpeg_range2: [30, 95]
gt_size: 256
queue_size: 180
# dataset and data loader settings
datasets:
train:
name: DF2K+OST
type: RealESRGANDataset
dataroot_gt: datasets/DF2K
meta_info: datasets/DF2K/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt
io_backend:
type: disk
blur_kernel_size: 21
kernel_list: ['iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso']
kernel_prob: [0.45, 0.25, 0.12, 0.03, 0.12, 0.03]
sinc_prob: 0.1
blur_sigma: [0.2, 3]
betag_range: [0.5, 4]
betap_range: [1, 2]
blur_kernel_size2: 21
kernel_list2: ['iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso']
kernel_prob2: [0.45, 0.25, 0.12, 0.03, 0.12, 0.03]
sinc_prob2: 0.1
blur_sigma2: [0.2, 1.5]
betag_range2: [0.5, 4]
betap_range2: [1, 2]
final_sinc_prob: 0.8
gt_size: 256
use_hflip: True
use_rot: False
# data loader
use_shuffle: true
num_worker_per_gpu: 5
batch_size_per_gpu: 12
dataset_enlarge_ratio: 1
prefetch_mode: ~
# Uncomment these for validation
# val:
# name: validation
# type: PairedImageDataset
# dataroot_gt: path_to_gt
# dataroot_lq: path_to_lq
# io_backend:
# type: disk
# network structures
network_g:
type: RRDBNet
num_in_ch: 3
num_out_ch: 3
num_feat: 64
num_block: 23
num_grow_ch: 32
scale: 2
# path
path:
pretrain_network_g: experiments/pretrained_models/RealESRGAN_x4plus.pth
param_key_g: params_ema
strict_load_g: False
resume_state: ~
# training settings
train:
ema_decay: 0.999
optim_g:
type: Adam
lr: !!float 2e-4
weight_decay: 0
betas: [0.9, 0.99]
scheduler:
type: MultiStepLR
milestones: [1000000]
gamma: 0.5
total_iter: 1000000
warmup_iter: -1 # no warm up
# losses
pixel_opt:
type: L1Loss
loss_weight: 1.0
reduction: mean
# Uncomment these for validation
# validation settings
# val:
# val_freq: !!float 5e3
# save_img: True
# metrics:
# psnr: # metric name
# type: calculate_psnr
# crop_border: 4
# test_y_channel: false
# logging settings
logger:
print_freq: 100
save_checkpoint_freq: !!float 5e3
use_tb_logger: true
wandb:
project: ~
resume_id: ~
# dist training settings
dist_params:
backend: nccl
port: 29500

options/train_realesrnet_x4plus.yml (modified)

@@ -1,8 +1,8 @@
# general settings
name: train_RealESRNetx4plus_1000k_B12G4_fromESRGAN
name: train_RealESRNetx4plus_1000k_B12G4
model_type: RealESRNetModel
scale: 4
num_gpu: 4
num_gpu: auto # auto: can infer from your visible devices automatically. official: 4 GPUs
manual_seed: 0
# ----------------- options for synthesizing training data in RealESRNetModel ----------------- #
@@ -36,7 +36,7 @@ datasets:
name: DF2K+OST
type: RealESRGANDataset
dataroot_gt: datasets/DF2K
meta_info: data/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt
meta_info: datasets/DF2K/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt
io_backend:
type: disk
@@ -124,7 +124,7 @@ train:
# save_img: True
# metrics:
# psnr: # metric name, can be arbitrary
# psnr: # metric name
# type: calculate_psnr
# crop_border: 4
# test_y_channel: false

realesrgan/__init__.py (new file)

@@ -0,0 +1,6 @@
# flake8: noqa
from .archs import *
from .data import *
from .models import *
from .utils import *
from .version import *

realesrgan/archs/__init__.py (modified)

@@ -7,4 +7,4 @@ from os import path as osp
arch_folder = osp.dirname(osp.abspath(__file__))
arch_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(arch_folder) if v.endswith('_arch.py')]
# import all the arch modules
_arch_modules = [importlib.import_module(f'archs.{file_name}') for file_name in arch_filenames]
_arch_modules = [importlib.import_module(f'realesrgan.archs.{file_name}') for file_name in arch_filenames]

realesrgan/archs/discriminator_arch.py (modified)

@@ -6,15 +6,23 @@ from torch.nn.utils import spectral_norm
@ARCH_REGISTRY.register()
class UNetDiscriminatorSN(nn.Module):
"""Defines a U-Net discriminator with spectral normalization (SN)"""
"""Defines a U-Net discriminator with spectral normalization (SN)
It is used in Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data.
Args:
num_in_ch (int): Channel number of inputs. Default: 3.
num_feat (int): Channel number of base intermediate features. Default: 64.
skip_connection (bool): Whether to use skip connections between U-Net. Default: True.
"""
def __init__(self, num_in_ch, num_feat=64, skip_connection=True):
super(UNetDiscriminatorSN, self).__init__()
self.skip_connection = skip_connection
norm = spectral_norm
# the first convolution
self.conv0 = nn.Conv2d(num_in_ch, num_feat, kernel_size=3, stride=1, padding=1)
# downsample
self.conv1 = norm(nn.Conv2d(num_feat, num_feat * 2, 4, 2, 1, bias=False))
self.conv2 = norm(nn.Conv2d(num_feat * 2, num_feat * 4, 4, 2, 1, bias=False))
self.conv3 = norm(nn.Conv2d(num_feat * 4, num_feat * 8, 4, 2, 1, bias=False))
@@ -22,14 +30,13 @@ class UNetDiscriminatorSN(nn.Module):
self.conv4 = norm(nn.Conv2d(num_feat * 8, num_feat * 4, 3, 1, 1, bias=False))
self.conv5 = norm(nn.Conv2d(num_feat * 4, num_feat * 2, 3, 1, 1, bias=False))
self.conv6 = norm(nn.Conv2d(num_feat * 2, num_feat, 3, 1, 1, bias=False))
# extra
# extra convolutions
self.conv7 = norm(nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=False))
self.conv8 = norm(nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=False))
self.conv9 = nn.Conv2d(num_feat, 1, 3, 1, 1)
def forward(self, x):
# downsample
x0 = F.leaky_relu(self.conv0(x), negative_slope=0.2, inplace=True)
x1 = F.leaky_relu(self.conv1(x0), negative_slope=0.2, inplace=True)
x2 = F.leaky_relu(self.conv2(x1), negative_slope=0.2, inplace=True)
@@ -52,7 +59,7 @@ class UNetDiscriminatorSN(nn.Module):
if self.skip_connection:
x6 = x6 + x0
# extra
# extra convolutions
out = F.leaky_relu(self.conv7(x6), negative_slope=0.2, inplace=True)
out = F.leaky_relu(self.conv8(out), negative_slope=0.2, inplace=True)
out = self.conv9(out)
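Because the decoder path upsamples back to the input resolution, this discriminator emits a per-pixel realness map rather than a single scalar. A quick shape check (a sketch; it mirrors the unit test added further down in this change):

import torch
from realesrgan.archs.discriminator_arch import UNetDiscriminatorSN

net_d = UNetDiscriminatorSN(num_in_ch=3, num_feat=64, skip_connection=True)
with torch.no_grad():
    score_map = net_d(torch.rand(1, 3, 256, 256))
print(score_map.shape)  # torch.Size([1, 1, 256, 256]): one score per input pixel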

realesrgan/archs/srvgg_arch.py Normal file

@@ -0,0 +1,69 @@
from basicsr.utils.registry import ARCH_REGISTRY
from torch import nn as nn
from torch.nn import functional as F
@ARCH_REGISTRY.register()
class SRVGGNetCompact(nn.Module):
"""A compact VGG-style network structure for super-resolution.
It is a compact network structure that performs upsampling in the last layer, so no convolution is
conducted on the HR feature space.
Args:
num_in_ch (int): Channel number of inputs. Default: 3.
num_out_ch (int): Channel number of outputs. Default: 3.
num_feat (int): Channel number of intermediate features. Default: 64.
num_conv (int): Number of convolution layers in the body network. Default: 16.
upscale (int): Upsampling factor. Default: 4.
act_type (str): Activation type, options: 'relu', 'prelu', 'leakyrelu'. Default: prelu.
"""
def __init__(self, num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu'):
super(SRVGGNetCompact, self).__init__()
self.num_in_ch = num_in_ch
self.num_out_ch = num_out_ch
self.num_feat = num_feat
self.num_conv = num_conv
self.upscale = upscale
self.act_type = act_type
self.body = nn.ModuleList()
# the first conv
self.body.append(nn.Conv2d(num_in_ch, num_feat, 3, 1, 1))
# the first activation
if act_type == 'relu':
activation = nn.ReLU(inplace=True)
elif act_type == 'prelu':
activation = nn.PReLU(num_parameters=num_feat)
elif act_type == 'leakyrelu':
activation = nn.LeakyReLU(negative_slope=0.1, inplace=True)
self.body.append(activation)
# the body structure
for _ in range(num_conv):
self.body.append(nn.Conv2d(num_feat, num_feat, 3, 1, 1))
# activation
if act_type == 'relu':
activation = nn.ReLU(inplace=True)
elif act_type == 'prelu':
activation = nn.PReLU(num_parameters=num_feat)
elif act_type == 'leakyrelu':
activation = nn.LeakyReLU(negative_slope=0.1, inplace=True)
self.body.append(activation)
# the last conv
self.body.append(nn.Conv2d(num_feat, num_out_ch * upscale * upscale, 3, 1, 1))
# upsample
self.upsampler = nn.PixelShuffle(upscale)
def forward(self, x):
out = x
for i in range(0, len(self.body)):
out = self.body[i](out)
out = self.upsampler(out)
# add the nearest upsampled image, so that the network learns the residual
base = F.interpolate(x, scale_factor=self.upscale, mode='nearest')
out += base
return out
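Since the last convolution emits num_out_ch * upscale * upscale channels, PixelShuffle can rearrange them into the upscaled image, and every convolution stays on the LR grid; that is what makes this arch cheap enough for video. A small usage sketch (the srvgg_arch module path is an assumption):

import torch
from realesrgan.archs.srvgg_arch import SRVGGNetCompact

net = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4)
with torch.no_grad():
    y = net(torch.rand(1, 3, 64, 64))
print(y.shape)  # torch.Size([1, 3, 256, 256]): 3*4*4=48 channels pixel-shuffled to 3 at 4x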

realesrgan/data/__init__.py

@@ -7,4 +7,4 @@ from os import path as osp
data_folder = osp.dirname(osp.abspath(__file__))
dataset_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(data_folder) if v.endswith('_dataset.py')]
# import all the dataset modules
_dataset_modules = [importlib.import_module(f'data.{file_name}') for file_name in dataset_filenames]
_dataset_modules = [importlib.import_module(f'realesrgan.data.{file_name}') for file_name in dataset_filenames]

realesrgan/data/realesrgan_dataset.py

@@ -15,18 +15,31 @@ from torch.utils import data as data
@DATASET_REGISTRY.register()
class RealESRGANDataset(data.Dataset):
"""
Dataset used for Real-ESRGAN model.
"""Dataset used for Real-ESRGAN model:
Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data.
It loads gt (Ground-Truth) images, and augments them.
It also generates blur kernels and sinc kernels for generating low-quality images.
Note that the low-quality images are processed in tensors on GPUs for faster processing.
Args:
opt (dict): Config for train datasets. It contains the following keys:
dataroot_gt (str): Data root path for gt.
meta_info (str): Path for meta information file.
io_backend (dict): IO backend type and other kwargs.
use_hflip (bool): Use horizontal flips.
use_rot (bool): Use rotation (use vertical flip and transposing h and w for implementation).
Please see more options in the code.
"""
def __init__(self, opt):
super(RealESRGANDataset, self).__init__()
self.opt = opt
# file client (io backend)
self.file_client = None
self.io_backend_opt = opt['io_backend']
self.gt_folder = opt['dataroot_gt']
# file client (lmdb io backend)
if self.io_backend_opt['type'] == 'lmdb':
self.io_backend_opt['db_paths'] = [self.gt_folder]
self.io_backend_opt['client_keys'] = ['gt']
@@ -35,18 +48,20 @@ class RealESRGANDataset(data.Dataset):
with open(osp.join(self.gt_folder, 'meta_info.txt')) as fin:
self.paths = [line.split('.')[0] for line in fin]
else:
# disk backend with meta_info
# Each line in the meta_info describes the relative path to an image
with open(self.opt['meta_info']) as fin:
paths = [line.strip() for line in fin]
paths = [line.strip().split(' ')[0] for line in fin]
self.paths = [os.path.join(self.gt_folder, v) for v in paths]
# blur settings for the first degradation
self.blur_kernel_size = opt['blur_kernel_size']
self.kernel_list = opt['kernel_list']
self.kernel_prob = opt['kernel_prob']
self.kernel_prob = opt['kernel_prob'] # a list for each kernel probability
self.blur_sigma = opt['blur_sigma']
self.betag_range = opt['betag_range']
self.betap_range = opt['betap_range']
self.sinc_prob = opt['sinc_prob']
self.betag_range = opt['betag_range'] # betag used in generalized Gaussian blur kernels
self.betap_range = opt['betap_range'] # betap used in plateau blur kernels
self.sinc_prob = opt['sinc_prob'] # the probability for sinc filters
# blur settings for the second degradation
self.blur_kernel_size2 = opt['blur_kernel_size2']
@@ -61,6 +76,7 @@ class RealESRGANDataset(data.Dataset):
self.final_sinc_prob = opt['final_sinc_prob']
self.kernel_range = [2 * v + 1 for v in range(3, 11)] # kernel size ranges from 7 to 21
# TODO: the kernel range is now hard-coded; it should be in the config file
self.pulse_tensor = torch.zeros(21, 21).float() # convolving with pulse tensor brings no blurry effect
self.pulse_tensor[10, 10] = 1
@@ -76,7 +92,7 @@ class RealESRGANDataset(data.Dataset):
while retry > 0:
try:
img_bytes = self.file_client.get(gt_path, 'gt')
except Exception as e:
except (IOError, OSError) as e:
logger = get_root_logger()
logger.warn(f'File client error: {e}, remaining retry times: {retry - 1}')
# choose another file to read
@@ -89,10 +105,11 @@ class RealESRGANDataset(data.Dataset):
retry -= 1
img_gt = imfrombytes(img_bytes, float32=True)
# -------------------- augmentation for training: flip, rotation -------------------- #
# -------------------- Do augmentation for training: flip, rotation -------------------- #
img_gt = augment(img_gt, self.opt['use_hflip'], self.opt['use_rot'])
# crop or pad to 400: 400 is hard-coded. You may change it accordingly
# crop or pad to 400
# TODO: 400 is hard-coded. You may change it accordingly
h, w = img_gt.shape[0:2]
crop_pad_size = 400
# pad
@@ -154,7 +171,7 @@ class RealESRGANDataset(data.Dataset):
pad_size = (21 - kernel_size) // 2
kernel2 = np.pad(kernel2, ((pad_size, pad_size), (pad_size, pad_size)))
# ------------------------------------- sinc kernel ------------------------------------- #
# ------------------------------------- the final sinc kernel ------------------------------------- #
if np.random.uniform() < self.opt['final_sinc_prob']:
kernel_size = random.choice(self.kernel_range)
omega_c = np.random.uniform(np.pi / 3, np.pi)
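The actual kernel synthesis is delegated to BasicSR's degradation helpers. A hedged sketch of producing one first-stage kernel with the settings from the options above (argument order follows basicsr.data.degradations):

import math
import numpy as np
from basicsr.data.degradations import circular_lowpass_kernel, random_mixed_kernels

kernel_size = 17  # any odd size from the hard-coded range 7..21
kernel = random_mixed_kernels(
    ['iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso'],
    [0.45, 0.25, 0.12, 0.03, 0.12, 0.03],
    kernel_size,
    [0.2, 3], [0.2, 3],  # sigma_x_range, sigma_y_range, i.e. blur_sigma
    [-math.pi, math.pi],  # rotation_range
    [0.5, 4], [1, 2],  # betag_range, betap_range
    noise_range=None)
pad_size = (21 - kernel_size) // 2
kernel = np.pad(kernel, ((pad_size, pad_size), (pad_size, pad_size)))  # pad to 21x21, as the dataset does
# a sinc (circular low-pass) kernel is used instead with probability sinc_prob
sinc_kernel = circular_lowpass_kernel(np.random.uniform(np.pi / 3, np.pi), kernel_size, pad_to=21)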

realesrgan/data/realesrgan_paired_dataset.py Normal file

@@ -0,0 +1,108 @@
import os
from basicsr.data.data_util import paired_paths_from_folder, paired_paths_from_lmdb
from basicsr.data.transforms import augment, paired_random_crop
from basicsr.utils import FileClient, imfrombytes, img2tensor
from basicsr.utils.registry import DATASET_REGISTRY
from torch.utils import data as data
from torchvision.transforms.functional import normalize
@DATASET_REGISTRY.register()
class RealESRGANPairedDataset(data.Dataset):
"""Paired image dataset for image restoration.
Read LQ (Low Quality, e.g. LR (Low Resolution), blurry, noisy, etc) and GT image pairs.
There are three modes:
1. 'lmdb': Use lmdb files.
If opt['io_backend'] == lmdb.
2. 'meta_info': Use meta information file to generate paths.
If opt['io_backend'] != lmdb and opt['meta_info'] is not None.
3. 'folder': Scan folders to generate paths.
The rest.
Args:
opt (dict): Config for train datasets. It contains the following keys:
dataroot_gt (str): Data root path for gt.
dataroot_lq (str): Data root path for lq.
meta_info (str): Path for meta information file.
io_backend (dict): IO backend type and other kwarg.
filename_tmpl (str): Template for each filename. Note that the template excludes the file extension.
Default: '{}'.
gt_size (int): Cropped patch size for gt patches.
use_hflip (bool): Use horizontal flips.
use_rot (bool): Use rotation (use vertical flip and transposing h
and w for implementation).
scale (int): Scale, which will be added automatically.
phase (str): 'train' or 'val'.
"""
def __init__(self, opt):
super(RealESRGANPairedDataset, self).__init__()
self.opt = opt
self.file_client = None
self.io_backend_opt = opt['io_backend']
# mean and std for normalizing the input images
self.mean = opt['mean'] if 'mean' in opt else None
self.std = opt['std'] if 'std' in opt else None
self.gt_folder, self.lq_folder = opt['dataroot_gt'], opt['dataroot_lq']
self.filename_tmpl = opt['filename_tmpl'] if 'filename_tmpl' in opt else '{}'
# file client (lmdb io backend)
if self.io_backend_opt['type'] == 'lmdb':
self.io_backend_opt['db_paths'] = [self.lq_folder, self.gt_folder]
self.io_backend_opt['client_keys'] = ['lq', 'gt']
self.paths = paired_paths_from_lmdb([self.lq_folder, self.gt_folder], ['lq', 'gt'])
elif 'meta_info' in self.opt and self.opt['meta_info'] is not None:
# disk backend with meta_info
# Each line in the meta_info describes the relative path to an image
with open(self.opt['meta_info']) as fin:
paths = [line.strip() for line in fin]
self.paths = []
for path in paths:
gt_path, lq_path = path.split(', ')
gt_path = os.path.join(self.gt_folder, gt_path)
lq_path = os.path.join(self.lq_folder, lq_path)
self.paths.append(dict([('gt_path', gt_path), ('lq_path', lq_path)]))
else:
# disk backend
# it will scan the whole folder to get meta info
# it will be time-consuming for folders with too many files. It is recommended to use an extra meta txt file
self.paths = paired_paths_from_folder([self.lq_folder, self.gt_folder], ['lq', 'gt'], self.filename_tmpl)
def __getitem__(self, index):
if self.file_client is None:
self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt)
scale = self.opt['scale']
# Load gt and lq images. Dimension order: HWC; channel order: BGR;
# image range: [0, 1], float32.
gt_path = self.paths[index]['gt_path']
img_bytes = self.file_client.get(gt_path, 'gt')
img_gt = imfrombytes(img_bytes, float32=True)
lq_path = self.paths[index]['lq_path']
img_bytes = self.file_client.get(lq_path, 'lq')
img_lq = imfrombytes(img_bytes, float32=True)
# augmentation for training
if self.opt['phase'] == 'train':
gt_size = self.opt['gt_size']
# random crop
img_gt, img_lq = paired_random_crop(img_gt, img_lq, gt_size, scale, gt_path)
# flip, rotation
img_gt, img_lq = augment([img_gt, img_lq], self.opt['use_hflip'], self.opt['use_rot'])
# BGR to RGB, HWC to CHW, numpy to tensor
img_gt, img_lq = img2tensor([img_gt, img_lq], bgr2rgb=True, float32=True)
# normalize
if self.mean is not None or self.std is not None:
normalize(img_lq, self.mean, self.std, inplace=True)
normalize(img_gt, self.mean, self.std, inplace=True)
return {'lq': img_lq, 'gt': img_gt, 'lq_path': lq_path, 'gt_path': gt_path}
def __len__(self):
return len(self.paths)
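A small sketch of wiring this dataset into a PyTorch DataLoader, reusing the test fixture added later in this change:

import yaml
from torch.utils import data
from realesrgan.data.realesrgan_paired_dataset import RealESRGANPairedDataset

with open('tests/data/test_realesrgan_paired_dataset.yml') as f:
    opt = yaml.safe_load(f)
loader = data.DataLoader(RealESRGANPairedDataset(opt), batch_size=1, shuffle=True)
batch = next(iter(loader))
print(batch['lq'].shape, batch['gt'].shape)  # (1, 3, 32, 32) and (1, 3, 128, 128) for gt_size 128, scale 4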

realesrgan/models/__init__.py

@@ -7,4 +7,4 @@ from os import path as osp
model_folder = osp.dirname(osp.abspath(__file__))
model_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(model_folder) if v.endswith('_model.py')]
# import all the model modules
_model_modules = [importlib.import_module(f'models.{file_name}') for file_name in model_filenames]
_model_modules = [importlib.import_module(f'realesrgan.models.{file_name}') for file_name in model_filenames]

realesrgan/models/realesrgan_model.py

@@ -13,35 +13,45 @@ from torch.nn import functional as F
@MODEL_REGISTRY.register()
class RealESRGANModel(SRGANModel):
"""RealESRGAN Model"""
"""RealESRGAN Model for Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data.
It mainly performs:
1. randomly synthesize LQ images in GPU tensors
2. optimize the networks with GAN training.
"""
def __init__(self, opt):
super(RealESRGANModel, self).__init__(opt)
self.jpeger = DiffJPEG(differentiable=False).cuda()
self.usm_sharpener = USMSharp().cuda()
self.queue_size = opt['queue_size']
self.jpeger = DiffJPEG(differentiable=False).cuda() # simulate JPEG compression artifacts
self.usm_sharpener = USMSharp().cuda() # do usm sharpening
self.queue_size = opt.get('queue_size', 180)
@torch.no_grad()
def _dequeue_and_enqueue(self):
# training pair pool
"""It is the training pair pool for increasing the diversity in a batch.
Batch processing limits the diversity of synthetic degradations in a batch. For example, samples in a
batch could not have different resize scaling factors. Therefore, we employ this training pair pool
to increase the degradation diversity in a batch.
"""
# initialize
b, c, h, w = self.lq.size()
if not hasattr(self, 'queue_lr'):
assert self.queue_size % b == 0, 'queue size should be divisible by batch size'
assert self.queue_size % b == 0, f'queue size {self.queue_size} should be divisible by batch size {b}'
self.queue_lr = torch.zeros(self.queue_size, c, h, w).cuda()
_, c, h, w = self.gt.size()
self.queue_gt = torch.zeros(self.queue_size, c, h, w).cuda()
self.queue_ptr = 0
if self.queue_ptr == self.queue_size: # full
if self.queue_ptr == self.queue_size: # the pool is full
# do dequeue and enqueue
# shuffle
idx = torch.randperm(self.queue_size)
self.queue_lr = self.queue_lr[idx]
self.queue_gt = self.queue_gt[idx]
# get
# get first b samples
lq_dequeue = self.queue_lr[0:b, :, :, :].clone()
gt_dequeue = self.queue_gt[0:b, :, :, :].clone()
# update
# update the queue
self.queue_lr[0:b, :, :, :] = self.lq.clone()
self.queue_gt[0:b, :, :, :] = self.gt.clone()
@@ -55,7 +65,9 @@ class RealESRGANModel(SRGANModel):
@torch.no_grad()
def feed_data(self, data):
if self.is_train:
"""Accept data from dataloader, and then add two-order degradations to obtain LQ images.
"""
if self.is_train and self.opt.get('high_order_degradation', True):
# training data synthesis
self.gt = data['gt'].to(self.device)
self.gt_usm = self.usm_sharpener(self.gt)
@@ -79,7 +91,7 @@ class RealESRGANModel(SRGANModel):
scale = 1
mode = random.choice(['area', 'bilinear', 'bicubic'])
out = F.interpolate(out, scale_factor=scale, mode=mode)
# noise
# add noise
gray_noise_prob = self.opt['gray_noise_prob']
if np.random.uniform() < self.opt['gaussian_noise_prob']:
out = random_add_gaussian_noise_pt(
@@ -93,7 +105,7 @@ class RealESRGANModel(SRGANModel):
rounds=False)
# JPEG compression
jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range'])
out = torch.clamp(out, 0, 1)
out = torch.clamp(out, 0, 1) # clamp to [0, 1], otherwise JPEGer will result in unpleasant artifacts
out = self.jpeger(out, quality=jpeg_p)
# ----------------------- The second degradation process ----------------------- #
@@ -111,7 +123,7 @@ class RealESRGANModel(SRGANModel):
mode = random.choice(['area', 'bilinear', 'bicubic'])
out = F.interpolate(
out, size=(int(ori_h / self.opt['scale'] * scale), int(ori_w / self.opt['scale'] * scale)), mode=mode)
# noise
# add noise
gray_noise_prob = self.opt['gray_noise_prob2']
if np.random.uniform() < self.opt['gaussian_noise_prob2']:
out = random_add_gaussian_noise_pt(
@@ -162,10 +174,13 @@ class RealESRGANModel(SRGANModel):
self._dequeue_and_enqueue()
# sharpen self.gt again, as we have changed the self.gt with self._dequeue_and_enqueue
self.gt_usm = self.usm_sharpener(self.gt)
self.lq = self.lq.contiguous() # for the warning: grad and param do not obey the gradient layout contract
else:
# for paired training or validation
self.lq = data['lq'].to(self.device)
if 'gt' in data:
self.gt = data['gt'].to(self.device)
self.gt_usm = self.usm_sharpener(self.gt)
def nondist_validation(self, dataloader, current_iter, tb_logger, save_img):
# do not use the synthetic process during validation
@@ -174,6 +189,7 @@ class RealESRGANModel(SRGANModel):
self.is_train = True
def optimize_parameters(self, current_iter):
# usm sharpening
l1_gt = self.gt_usm
percep_gt = self.gt_usm
gan_gt = self.gt_usm

realesrgan/models/realesrnet_model.py

@@ -12,35 +12,46 @@ from torch.nn import functional as F
@MODEL_REGISTRY.register()
class RealESRNetModel(SRModel):
"""RealESRNet Model"""
"""RealESRNet Model for Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data.
It is trained without GAN losses.
It mainly performs:
1. randomly synthesize LQ images in GPU tensors
2. optimize the network with the pixel (L1) loss, without GAN training.
"""
def __init__(self, opt):
super(RealESRNetModel, self).__init__(opt)
self.jpeger = DiffJPEG(differentiable=False).cuda()
self.usm_sharpener = USMSharp().cuda()
self.queue_size = opt['queue_size']
self.jpeger = DiffJPEG(differentiable=False).cuda() # simulate JPEG compression artifacts
self.usm_sharpener = USMSharp().cuda() # do usm sharpening
self.queue_size = opt.get('queue_size', 180)
@torch.no_grad()
def _dequeue_and_enqueue(self):
# training pair pool
"""It is the training pair pool for increasing the diversity in a batch.
Batch processing limits the diversity of synthetic degradations in a batch. For example, samples in a
batch could not have different resize scaling factors. Therefore, we employ this training pair pool
to increase the degradation diversity in a batch.
"""
# initialize
b, c, h, w = self.lq.size()
if not hasattr(self, 'queue_lr'):
assert self.queue_size % b == 0, 'queue size should be divisible by batch size'
assert self.queue_size % b == 0, f'queue size {self.queue_size} should be divisible by batch size {b}'
self.queue_lr = torch.zeros(self.queue_size, c, h, w).cuda()
_, c, h, w = self.gt.size()
self.queue_gt = torch.zeros(self.queue_size, c, h, w).cuda()
self.queue_ptr = 0
if self.queue_ptr == self.queue_size: # full
if self.queue_ptr == self.queue_size: # the pool is full
# do dequeue and enqueue
# shuffle
idx = torch.randperm(self.queue_size)
self.queue_lr = self.queue_lr[idx]
self.queue_gt = self.queue_gt[idx]
# get
# get first b samples
lq_dequeue = self.queue_lr[0:b, :, :, :].clone()
gt_dequeue = self.queue_gt[0:b, :, :, :].clone()
# update
# update the queue
self.queue_lr[0:b, :, :, :] = self.lq.clone()
self.queue_gt[0:b, :, :, :] = self.gt.clone()
@@ -54,10 +65,12 @@ class RealESRNetModel(SRModel):
@torch.no_grad()
def feed_data(self, data):
if self.is_train:
"""Accept data from dataloader, and then add two-order degradations to obtain LQ images.
"""
if self.is_train and self.opt.get('high_order_degradation', True):
# training data synthesis
self.gt = data['gt'].to(self.device)
# USM the GT images
# USM sharpen the GT images
if self.opt['gt_usm'] is True:
self.gt = self.usm_sharpener(self.gt)
@@ -80,7 +93,7 @@ class RealESRNetModel(SRModel):
scale = 1
mode = random.choice(['area', 'bilinear', 'bicubic'])
out = F.interpolate(out, scale_factor=scale, mode=mode)
# noise
# add noise
gray_noise_prob = self.opt['gray_noise_prob']
if np.random.uniform() < self.opt['gaussian_noise_prob']:
out = random_add_gaussian_noise_pt(
@@ -94,7 +107,7 @@ class RealESRNetModel(SRModel):
rounds=False)
# JPEG compression
jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range'])
out = torch.clamp(out, 0, 1)
out = torch.clamp(out, 0, 1) # clamp to [0, 1], otherwise JPEGer will result in unpleasant artifacts
out = self.jpeger(out, quality=jpeg_p)
# ----------------------- The second degradation process ----------------------- #
@@ -112,7 +125,7 @@ class RealESRNetModel(SRModel):
mode = random.choice(['area', 'bilinear', 'bicubic'])
out = F.interpolate(
out, size=(int(ori_h / self.opt['scale'] * scale), int(ori_w / self.opt['scale'] * scale)), mode=mode)
# noise
# add noise
gray_noise_prob = self.opt['gray_noise_prob2']
if np.random.uniform() < self.opt['gaussian_noise_prob2']:
out = random_add_gaussian_noise_pt(
@@ -160,10 +173,13 @@ class RealESRNetModel(SRModel):
# training pair pool
self._dequeue_and_enqueue()
self.lq = self.lq.contiguous() # for the warning: grad and param do not obey the gradient layout contract
else:
# for paired training or validation
self.lq = data['lq'].to(self.device)
if 'gt' in data:
self.gt = data['gt'].to(self.device)
self.gt_usm = self.usm_sharpener(self.gt)
def nondist_validation(self, dataloader, current_iter, tb_logger, save_img):
# do not use the synthetic process during validation

realesrgan/train.py Normal file

@@ -0,0 +1,11 @@
# flake8: noqa
import os.path as osp
from basicsr.train import train_pipeline
import realesrgan.archs
import realesrgan.data
import realesrgan.models
if __name__ == '__main__':
root_path = osp.abspath(osp.join(__file__, osp.pardir, osp.pardir))
train_pipeline(root_path)

realesrgan/utils.py Normal file

@@ -0,0 +1,280 @@
import cv2
import math
import numpy as np
import os
import queue
import threading
import torch
from basicsr.utils.download_util import load_file_from_url
from torch.nn import functional as F
ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
class RealESRGANer():
"""A helper class for upsampling images with RealESRGAN.
Args:
scale (int): Upsampling scale factor used in the networks. It is usually 2 or 4.
model_path (str): The path to the pretrained model. It can also be a URL (the model will first be downloaded automatically).
model (nn.Module): The defined network. Default: None.
tile (int): Since large input images can exhaust GPU memory, this tile option first crops the input image into
tiles, processes each of them, and finally merges them back into one image. 0 means tiling is disabled. Default: 0.
tile_pad (int): The pad size for each tile, to remove border artifacts. Default: 10.
pre_pad (int): Pad the input images to avoid border artifacts. Default: 10.
half (bool): Whether to use half precision during inference. Default: False.
"""
def __init__(self, scale, model_path, model=None, tile=0, tile_pad=10, pre_pad=10, half=False):
self.scale = scale
self.tile_size = tile
self.tile_pad = tile_pad
self.pre_pad = pre_pad
self.mod_scale = None
self.half = half
# initialize model
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# if the model_path starts with https, it will first download models to the folder: realesrgan/weights
if model_path.startswith('https://'):
model_path = load_file_from_url(
url=model_path, model_dir=os.path.join(ROOT_DIR, 'realesrgan/weights'), progress=True, file_name=None)
loadnet = torch.load(model_path, map_location=torch.device('cpu'))
# prefer to use params_ema
if 'params_ema' in loadnet:
keyname = 'params_ema'
else:
keyname = 'params'
model.load_state_dict(loadnet[keyname], strict=True)
model.eval()
self.model = model.to(self.device)
if self.half:
self.model = self.model.half()
def pre_process(self, img):
"""Pre-process, such as pre-pad and mod pad, so that the images can be divisible
"""
img = torch.from_numpy(np.transpose(img, (2, 0, 1))).float()
self.img = img.unsqueeze(0).to(self.device)
if self.half:
self.img = self.img.half()
# pre_pad
if self.pre_pad != 0:
self.img = F.pad(self.img, (0, self.pre_pad, 0, self.pre_pad), 'reflect')
# mod pad for divisible borders
if self.scale == 2:
self.mod_scale = 2
elif self.scale == 1:
self.mod_scale = 4
if self.mod_scale is not None:
self.mod_pad_h, self.mod_pad_w = 0, 0
_, _, h, w = self.img.size()
if (h % self.mod_scale != 0):
self.mod_pad_h = (self.mod_scale - h % self.mod_scale)
if (w % self.mod_scale != 0):
self.mod_pad_w = (self.mod_scale - w % self.mod_scale)
self.img = F.pad(self.img, (0, self.mod_pad_w, 0, self.mod_pad_h), 'reflect')
def process(self):
# model inference
self.output = self.model(self.img)
def tile_process(self):
"""It will first crop input images to tiles, and then process each tile.
Finally, all the processed tiles are merged into one image.
Modified from: https://github.com/ata4/esrgan-launcher
"""
batch, channel, height, width = self.img.shape
output_height = height * self.scale
output_width = width * self.scale
output_shape = (batch, channel, output_height, output_width)
# start with black image
self.output = self.img.new_zeros(output_shape)
tiles_x = math.ceil(width / self.tile_size)
tiles_y = math.ceil(height / self.tile_size)
# loop over all tiles
for y in range(tiles_y):
for x in range(tiles_x):
# extract tile from input image
ofs_x = x * self.tile_size
ofs_y = y * self.tile_size
# input tile area on total image
input_start_x = ofs_x
input_end_x = min(ofs_x + self.tile_size, width)
input_start_y = ofs_y
input_end_y = min(ofs_y + self.tile_size, height)
# input tile area on total image with padding
input_start_x_pad = max(input_start_x - self.tile_pad, 0)
input_end_x_pad = min(input_end_x + self.tile_pad, width)
input_start_y_pad = max(input_start_y - self.tile_pad, 0)
input_end_y_pad = min(input_end_y + self.tile_pad, height)
# input tile dimensions
input_tile_width = input_end_x - input_start_x
input_tile_height = input_end_y - input_start_y
tile_idx = y * tiles_x + x + 1
input_tile = self.img[:, :, input_start_y_pad:input_end_y_pad, input_start_x_pad:input_end_x_pad]
# upscale tile
try:
with torch.no_grad():
output_tile = self.model(input_tile)
except RuntimeError as error:
print('Error', error)
print(f'\tTile {tile_idx}/{tiles_x * tiles_y}')
# output tile area on total image
output_start_x = input_start_x * self.scale
output_end_x = input_end_x * self.scale
output_start_y = input_start_y * self.scale
output_end_y = input_end_y * self.scale
# output tile area without padding
output_start_x_tile = (input_start_x - input_start_x_pad) * self.scale
output_end_x_tile = output_start_x_tile + input_tile_width * self.scale
output_start_y_tile = (input_start_y - input_start_y_pad) * self.scale
output_end_y_tile = output_start_y_tile + input_tile_height * self.scale
# put tile into output image
self.output[:, :, output_start_y:output_end_y,
output_start_x:output_end_x] = output_tile[:, :, output_start_y_tile:output_end_y_tile,
output_start_x_tile:output_end_x_tile]
def post_process(self):
# remove extra pad
if self.mod_scale is not None:
_, _, h, w = self.output.size()
self.output = self.output[:, :, 0:h - self.mod_pad_h * self.scale, 0:w - self.mod_pad_w * self.scale]
# remove prepad
if self.pre_pad != 0:
_, _, h, w = self.output.size()
self.output = self.output[:, :, 0:h - self.pre_pad * self.scale, 0:w - self.pre_pad * self.scale]
return self.output
@torch.no_grad()
def enhance(self, img, outscale=None, alpha_upsampler='realesrgan'):
h_input, w_input = img.shape[0:2]
# img: numpy
img = img.astype(np.float32)
if np.max(img) > 256: # 16-bit image
max_range = 65535
print('\tInput is a 16-bit image')
else:
max_range = 255
img = img / max_range
if len(img.shape) == 2: # gray image
img_mode = 'L'
img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
elif img.shape[2] == 4: # RGBA image with alpha channel
img_mode = 'RGBA'
alpha = img[:, :, 3]
img = img[:, :, 0:3]
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
if alpha_upsampler == 'realesrgan':
alpha = cv2.cvtColor(alpha, cv2.COLOR_GRAY2RGB)
else:
img_mode = 'RGB'
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# ------------------- process image (without the alpha channel) ------------------- #
self.pre_process(img)
if self.tile_size > 0:
self.tile_process()
else:
self.process()
output_img = self.post_process()
output_img = output_img.data.squeeze().float().cpu().clamp_(0, 1).numpy()
output_img = np.transpose(output_img[[2, 1, 0], :, :], (1, 2, 0))
if img_mode == 'L':
output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2GRAY)
# ------------------- process the alpha channel if necessary ------------------- #
if img_mode == 'RGBA':
if alpha_upsampler == 'realesrgan':
self.pre_process(alpha)
if self.tile_size > 0:
self.tile_process()
else:
self.process()
output_alpha = self.post_process()
output_alpha = output_alpha.data.squeeze().float().cpu().clamp_(0, 1).numpy()
output_alpha = np.transpose(output_alpha[[2, 1, 0], :, :], (1, 2, 0))
output_alpha = cv2.cvtColor(output_alpha, cv2.COLOR_BGR2GRAY)
else: # use the cv2 resize for alpha channel
h, w = alpha.shape[0:2]
output_alpha = cv2.resize(alpha, (w * self.scale, h * self.scale), interpolation=cv2.INTER_LINEAR)
# merge the alpha channel
output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2BGRA)
output_img[:, :, 3] = output_alpha
# ------------------------------ return ------------------------------ #
if max_range == 65535: # 16-bit image
output = (output_img * 65535.0).round().astype(np.uint16)
else:
output = (output_img * 255.0).round().astype(np.uint8)
if outscale is not None and outscale != float(self.scale):
output = cv2.resize(
output, (
int(w_input * outscale),
int(h_input * outscale),
), interpolation=cv2.INTER_LANCZOS4)
return output, img_mode
class PrefetchReader(threading.Thread):
"""Prefetch images.
Args:
img_list (list[str]): A list of image paths to be read.
num_prefetch_queue (int): Length of the prefetch queue.
"""
def __init__(self, img_list, num_prefetch_queue):
super().__init__()
self.que = queue.Queue(num_prefetch_queue)
self.img_list = img_list
def run(self):
for img_path in self.img_list:
img = cv2.imread(img_path, cv2.IMREAD_UNCHANGED)
self.que.put(img)
self.que.put(None)
def __next__(self):
next_item = self.que.get()
if next_item is None:
raise StopIteration
return next_item
def __iter__(self):
return self
class IOConsumer(threading.Thread):
def __init__(self, opt, que, qid):
super().__init__()
self._queue = que
self.qid = qid
self.opt = opt
def run(self):
while True:
msg = self._queue.get()
if isinstance(msg, str) and msg == 'quit':
break
output = msg['output']
save_path = msg['save_path']
cv2.imwrite(save_path, output)
print(f'IO worker {self.qid} is done.')
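A typical way to drive this helper end to end (the file paths and the 400-pixel tile size are assumptions for illustration, not project defaults):

import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan.utils import RealESRGANer

model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
upsampler = RealESRGANer(
    scale=4,
    model_path='experiments/pretrained_models/RealESRGAN_x4plus.pth',
    model=model,
    tile=400,  # tile to bound GPU memory; 0 disables tiling
    tile_pad=10,
    pre_pad=0,
    half=False)
img = cv2.imread('inputs/my_photo.png', cv2.IMREAD_UNCHANGED)
output, img_mode = upsampler.enhance(img, outscale=4)
cv2.imwrite('results/my_photo_out.png', output)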

realesrgan/weights/README.md Normal file

@@ -0,0 +1,3 @@
# Weights
Put the downloaded weights to this folder.

requirements.txt

@@ -1,4 +1,9 @@
basicsr
cv2
basicsr>=1.3.3.11
facexlib>=0.2.0.3
gfpgan>=0.2.1
numpy
opencv-python
Pillow
torch>=1.7
torchvision
tqdm

scripts/extract_subimages.py Normal file

@@ -0,0 +1,135 @@
import argparse
import cv2
import numpy as np
import os
import sys
from basicsr.utils import scandir
from multiprocessing import Pool
from os import path as osp
from tqdm import tqdm
def main(args):
"""A multi-thread tool to crop large images to sub-images for faster IO.
opt (dict): Configuration dict. It contains:
n_thread (int): Thread number.
compression_level (int): CV_IMWRITE_PNG_COMPRESSION from 0 to 9. A higher value means a smaller size
and longer compression time. Use 0 for faster CPU decompression. Default: 3, the same as in cv2.
input_folder (str): Path to the input folder.
save_folder (str): Path to save folder.
crop_size (int): Crop size.
step (int): Step for overlapped sliding window.
thresh_size (int): Threshold size. Patches whose size is lower than thresh_size will be dropped.
Usage:
For each folder, run this script.
Typically, there are a GT folder and an LQ folder to be processed for the DIV2K dataset.
After processing, each sub-folder should have the same number of sub-images.
Remember to modify opt configurations according to your settings.
"""
opt = {}
opt['n_thread'] = args.n_thread
opt['compression_level'] = args.compression_level
opt['input_folder'] = args.input
opt['save_folder'] = args.output
opt['crop_size'] = args.crop_size
opt['step'] = args.step
opt['thresh_size'] = args.thresh_size
extract_subimages(opt)
def extract_subimages(opt):
"""Crop images to subimages.
Args:
opt (dict): Configuration dict. It contains:
input_folder (str): Path to the input folder.
save_folder (str): Path to save folder.
n_thread (int): Thread number.
"""
input_folder = opt['input_folder']
save_folder = opt['save_folder']
if not osp.exists(save_folder):
os.makedirs(save_folder)
print(f'mkdir {save_folder} ...')
else:
print(f'Folder {save_folder} already exists. Exit.')
sys.exit(1)
# scan all images
img_list = list(scandir(input_folder, full_path=True))
pbar = tqdm(total=len(img_list), unit='image', desc='Extract')
pool = Pool(opt['n_thread'])
for path in img_list:
pool.apply_async(worker, args=(path, opt), callback=lambda arg: pbar.update(1))
pool.close()
pool.join()
pbar.close()
print('All processes done.')
def worker(path, opt):
"""Worker for each process.
Args:
path (str): Image path.
opt (dict): Configuration dict. It contains:
crop_size (int): Crop size.
step (int): Step for overlapped sliding window.
thresh_size (int): Threshold size. Patches whose size is lower than thresh_size will be dropped.
save_folder (str): Path to save folder.
compression_level (int): for cv2.IMWRITE_PNG_COMPRESSION.
Returns:
process_info (str): Process information displayed in progress bar.
"""
crop_size = opt['crop_size']
step = opt['step']
thresh_size = opt['thresh_size']
img_name, extension = osp.splitext(osp.basename(path))
# remove the x2, x3, x4 and x8 in the filename for DIV2K
img_name = img_name.replace('x2', '').replace('x3', '').replace('x4', '').replace('x8', '')
img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
h, w = img.shape[0:2]
h_space = np.arange(0, h - crop_size + 1, step)
if h - (h_space[-1] + crop_size) > thresh_size:
h_space = np.append(h_space, h - crop_size)
w_space = np.arange(0, w - crop_size + 1, step)
if w - (w_space[-1] + crop_size) > thresh_size:
w_space = np.append(w_space, w - crop_size)
index = 0
for x in h_space:
for y in w_space:
index += 1
cropped_img = img[x:x + crop_size, y:y + crop_size, ...]
cropped_img = np.ascontiguousarray(cropped_img)
cv2.imwrite(
osp.join(opt['save_folder'], f'{img_name}_s{index:03d}{extension}'), cropped_img,
[cv2.IMWRITE_PNG_COMPRESSION, opt['compression_level']])
process_info = f'Processing {img_name} ...'
return process_info
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--input', type=str, default='datasets/DF2K/DF2K_HR', help='Input folder')
parser.add_argument('--output', type=str, default='datasets/DF2K/DF2K_HR_sub', help='Output folder')
parser.add_argument('--crop_size', type=int, default=480, help='Crop size')
parser.add_argument('--step', type=int, default=240, help='Step for overlapped sliding window')
parser.add_argument(
'--thresh_size',
type=int,
default=0,
help='Threshold size. Patches whose size is lower than thresh_size will be dropped.')
parser.add_argument('--n_thread', type=int, default=20, help='Thread number.')
parser.add_argument('--compression_level', type=int, default=3, help='Compression level')
args = parser.parse_args()
main(args)
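The h_space/w_space arithmetic is worth a dry run. With the default crop_size=480 and step=240 on a 2040x1356 image (a typical DIV2K resolution, used here as an assumption), the regular grid plus the two appended edge positions yields 40 sub-images:

import numpy as np

h, w, crop_size, step, thresh_size = 1356, 2040, 480, 240, 0
h_space = np.arange(0, h - crop_size + 1, step)
if h - (h_space[-1] + crop_size) > thresh_size:
    h_space = np.append(h_space, h - crop_size)  # extra row flush with the bottom edge
w_space = np.arange(0, w - crop_size + 1, step)
if w - (w_space[-1] + crop_size) > thresh_size:
    w_space = np.append(w_space, w - crop_size)  # extra column flush with the right edge
print(len(h_space), len(w_space), len(h_space) * len(w_space))  # 5 8 40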

scripts/generate_meta_info.py Normal file

@@ -0,0 +1,58 @@
import argparse
import cv2
import glob
import os
def main(args):
txt_file = open(args.meta_info, 'w')
for folder, root in zip(args.input, args.root):
img_paths = sorted(glob.glob(os.path.join(folder, '*')))
for img_path in img_paths:
status = True
if args.check:
# read the image once to check it, as some images may have errors
try:
img = cv2.imread(img_path)
except (IOError, OSError) as error:
print(f'Read {img_path} error: {error}')
status = False
if img is None:
status = False
print(f'Img is None: {img_path}')
if status:
# get the relative path
img_name = os.path.relpath(img_path, root)
print(img_name)
txt_file.write(f'{img_name}\n')
if __name__ == '__main__':
"""Generate meta info (txt file) for only Ground-Truth images.
It can also generate meta info from several folders into one txt file.
"""
parser = argparse.ArgumentParser()
parser.add_argument(
'--input',
nargs='+',
default=['datasets/DF2K/DF2K_HR', 'datasets/DF2K/DF2K_multiscale'],
help='Input folder, can be a list')
parser.add_argument(
'--root',
nargs='+',
default=['datasets/DF2K', 'datasets/DF2K'],
help='Folder root, should have the same length as the input folders')
parser.add_argument(
'--meta_info',
type=str,
default='datasets/DF2K/meta_info/meta_info_DF2Kmultiscale.txt',
help='txt path for meta info')
parser.add_argument('--check', action='store_true', help='Read image to check whether it is ok')
args = parser.parse_args()
assert len(args.input) == len(args.root), ('Input folder and folder root should have the same length, but got '
f'{len(args.input)} and {len(args.root)}.')
os.makedirs(os.path.dirname(args.meta_info), exist_ok=True)
main(args)

scripts/generate_meta_info_pairdata.py Normal file

@@ -0,0 +1,49 @@
import argparse
import glob
import os
def main(args):
txt_file = open(args.meta_info, 'w')
# scan images
img_paths_gt = sorted(glob.glob(os.path.join(args.input[0], '*')))
img_paths_lq = sorted(glob.glob(os.path.join(args.input[1], '*')))
assert len(img_paths_gt) == len(img_paths_lq), ('GT folder and LQ folder should have the same length, but got '
f'{len(img_paths_gt)} and {len(img_paths_lq)}.')
for img_path_gt, img_path_lq in zip(img_paths_gt, img_paths_lq):
# get the relative paths
img_name_gt = os.path.relpath(img_path_gt, args.root[0])
img_name_lq = os.path.relpath(img_path_lq, args.root[1])
print(f'{img_name_gt}, {img_name_lq}')
txt_file.write(f'{img_name_gt}, {img_name_lq}\n')
if __name__ == '__main__':
"""This script is used to generate meta info (txt file) for paired images.
"""
parser = argparse.ArgumentParser()
parser.add_argument(
'--input',
nargs='+',
default=['datasets/DF2K/DIV2K_train_HR_sub', 'datasets/DF2K/DIV2K_train_LR_bicubic_X4_sub'],
help='Input folder, should be [gt_folder, lq_folder]')
parser.add_argument('--root', nargs='+', default=[None, None], help='Folder root, will use the ')
parser.add_argument(
'--meta_info',
type=str,
default='datasets/DF2K/meta_info/meta_info_DIV2K_sub_pair.txt',
help='txt path for meta info')
args = parser.parse_args()
assert len(args.input) == 2, 'Input folder should have two elements: gt folder and lq folder'
assert len(args.root) == 2, 'Root path should have two elements: root for gt folder and lq folder'
os.makedirs(os.path.dirname(args.meta_info), exist_ok=True)
for i in range(2):
if args.input[i].endswith('/'):
args.input[i] = args.input[i][:-1]
if args.root[i] is None:
args.root[i] = os.path.dirname(args.input[i])
main(args)

scripts/generate_multiscale_DF2K.py Normal file

@@ -0,0 +1,48 @@
import argparse
import glob
import os
from PIL import Image
def main(args):
# For DF2K, we consider the following three scales,
# plus the smallest version, whose shortest edge is 400
scale_list = [0.75, 0.5, 1 / 3]
shortest_edge = 400
path_list = sorted(glob.glob(os.path.join(args.input, '*')))
for path in path_list:
print(path)
basename = os.path.splitext(os.path.basename(path))[0]
img = Image.open(path)
width, height = img.size
for idx, scale in enumerate(scale_list):
print(f'\t{scale:.2f}')
rlt = img.resize((int(width * scale), int(height * scale)), resample=Image.LANCZOS)
rlt.save(os.path.join(args.output, f'{basename}T{idx}.png'))
# save the smallest version, whose shortest edge is 400
if width < height:
ratio = height / width
width = shortest_edge
height = int(width * ratio)
else:
ratio = width / height
height = shortest_edge
width = int(height * ratio)
rlt = img.resize((int(width), int(height)), resample=Image.LANCZOS)
rlt.save(os.path.join(args.output, f'{basename}T{idx+1}.png'))
if __name__ == '__main__':
"""Generate multi-scale versions for GT images with LANCZOS resampling.
It is now used for the DF2K dataset (DIV2K + Flickr2K)
"""
parser = argparse.ArgumentParser()
parser.add_argument('--input', type=str, default='datasets/DF2K/DF2K_HR', help='Input folder')
parser.add_argument('--output', type=str, default='datasets/DF2K/DF2K_multiscale', help='Output folder')
args = parser.parse_args()
os.makedirs(args.output, exist_ok=True)
main(args)
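A dry run of the resizing rules for a 2040x1356 source (size chosen only for illustration): the three fixed scales give 1530x1017, 1020x678 and 680x452, and the shortest-edge rule gives 601x400:

width, height = 2040, 1356
for scale in (0.75, 0.5, 1 / 3):
    print(int(width * scale), int(height * scale))  # 1530 1017, 1020 678, 680 452
shortest_edge = 400
if width < height:
    ratio = height / width
    width, height = shortest_edge, int(shortest_edge * ratio)
else:
    ratio = width / height
    height, width = shortest_edge, int(shortest_edge * ratio)
print(width, height)  # 601 400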

scripts/pytorch2onnx.py

@@ -1,17 +1,36 @@
import argparse
import torch
import torch.onnx
from basicsr.archs.rrdbnet_arch import RRDBNet
# An instance of your model
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32)
model.load_state_dict(torch.load('experiments/pretrained_models/RealESRGAN_x4plus.pth')['params_ema'])
# set the train mode to false since we will only run the forward pass.
model.train(False)
model.cpu().eval()
# An example input you would normally provide to your model's forward() method
x = torch.rand(1, 3, 64, 64)
def main(args):
# An instance of the model
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
if args.params:
keyname = 'params'
else:
keyname = 'params_ema'
model.load_state_dict(torch.load(args.input)[keyname])
# set the train mode to false since we will only run the forward pass.
model.train(False)
model.cpu().eval()
# Export the model
with torch.no_grad():
torch_out = torch.onnx._export(model, x, 'realesrgan-x4.onnx', opset_version=11, export_params=True)
# An example input
x = torch.rand(1, 3, 64, 64)
# Export the model
with torch.no_grad():
torch_out = torch.onnx._export(model, x, args.output, opset_version=11, export_params=True)
print(torch_out.shape)
if __name__ == '__main__':
"""Convert pytorch model to onnx models"""
parser = argparse.ArgumentParser()
parser.add_argument(
'--input', type=str, default='experiments/pretrained_models/RealESRGAN_x4plus.pth', help='Input model path')
parser.add_argument('--output', type=str, default='realesrgan-x4.onnx', help='Output onnx path')
parser.add_argument('--params', action='store_true', help='Use params instead of params_ema')
args = parser.parse_args()
main(args)
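A hedged way to sanity-check the exported graph (onnxruntime is an extra dependency that is not in requirements.txt, and newer versions may need an explicit providers argument):

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession('realesrgan-x4.onnx')
x = np.random.rand(1, 3, 64, 64).astype(np.float32)
out = sess.run(None, {sess.get_inputs()[0].name: x})[0]
print(out.shape)  # (1, 3, 256, 256) for the x4 model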

setup.cfg

@@ -16,7 +16,18 @@ split_before_expression_after_opening_paren = true
line_length = 120
multi_line_output = 0
known_standard_library = pkg_resources,setuptools
known_first_party = basicsr # modify it!
known_third_party = basicsr,cv2,numpy,torch
known_first_party = realesrgan
known_third_party = PIL,basicsr,cv2,numpy,pytest,torch,torchvision,tqdm,yaml
no_lines_before = STDLIB,LOCALFOLDER
default_section = THIRDPARTY
[codespell]
skip = .git,./docs/build
count =
quiet-level = 3
[aliases]
test=pytest
[tool:pytest]
addopts=tests/

setup.py Normal file

@@ -0,0 +1,107 @@
#!/usr/bin/env python
from setuptools import find_packages, setup
import os
import subprocess
import time
version_file = 'realesrgan/version.py'
def readme():
with open('README.md', encoding='utf-8') as f:
content = f.read()
return content
def get_git_hash():
def _minimal_ext_cmd(cmd):
# construct minimal environment
env = {}
for k in ['SYSTEMROOT', 'PATH', 'HOME']:
v = os.environ.get(k)
if v is not None:
env[k] = v
# LANGUAGE is used on win32
env['LANGUAGE'] = 'C'
env['LANG'] = 'C'
env['LC_ALL'] = 'C'
out = subprocess.Popen(cmd, stdout=subprocess.PIPE, env=env).communicate()[0]
return out
try:
out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD'])
sha = out.strip().decode('ascii')
except OSError:
sha = 'unknown'
return sha
def get_hash():
if os.path.exists('.git'):
sha = get_git_hash()[:7]
else:
sha = 'unknown'
return sha
def write_version_py():
content = """# GENERATED VERSION FILE
# TIME: {}
__version__ = '{}'
__gitsha__ = '{}'
version_info = ({})
"""
sha = get_hash()
with open('VERSION', 'r') as f:
SHORT_VERSION = f.read().strip()
VERSION_INFO = ', '.join([x if x.isdigit() else f'"{x}"' for x in SHORT_VERSION.split('.')])
version_file_str = content.format(time.asctime(), SHORT_VERSION, sha, VERSION_INFO)
with open(version_file, 'w') as f:
f.write(version_file_str)
def get_version():
with open(version_file, 'r') as f:
exec(compile(f.read(), version_file, 'exec'))
return locals()['__version__']
def get_requirements(filename='requirements.txt'):
here = os.path.dirname(os.path.realpath(__file__))
with open(os.path.join(here, filename), 'r') as f:
requires = [line.replace('\n', '') for line in f.readlines()]
return requires
if __name__ == '__main__':
write_version_py()
setup(
name='realesrgan',
version=get_version(),
description='Real-ESRGAN aims at developing Practical Algorithms for General Image Restoration',
long_description=readme(),
long_description_content_type='text/markdown',
author='Xintao Wang',
author_email='xintao.wang@outlook.com',
keywords='computer vision, pytorch, image restoration, super-resolution, esrgan, real-esrgan',
url='https://github.com/xinntao/Real-ESRGAN',
include_package_data=True,
packages=find_packages(exclude=('options', 'datasets', 'experiments', 'results', 'tb_logger', 'wandb')),
classifiers=[
'Development Status :: 4 - Beta',
'License :: OSI Approved :: Apache Software License',
'Operating System :: OS Independent',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
],
license='BSD-3-Clause License',
setup_requires=['cython', 'numpy'],
install_requires=get_requirements(),
zip_safe=False)
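For reference, this is roughly what write_version_py() emits into realesrgan/version.py (the version number and sha below are placeholders, not real values):

# GENERATED VERSION FILE
# TIME: Sun Nov 28 00:00:00 2021
__version__ = '0.2.2.4'
__gitsha__ = '0000000'
version_info = (0, 2, 2, 4)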

tests/data/gt.lmdb/data.mdb (new binary file)
tests/data/gt.lmdb/lock.mdb (new binary file)

tests/data/gt.lmdb/meta_info.txt Normal file

@@ -0,0 +1,2 @@
baboon.png (480,500,3) 1
comic.png (360,240,3) 1

tests/data/gt/baboon.png (new binary file, 532 KiB)
tests/data/gt/comic.png (new binary file, 195 KiB)

tests/data/lq.lmdb/data.mdb (new binary file)
tests/data/lq.lmdb/lock.mdb (new binary file)

tests/data/lq.lmdb/meta_info.txt Normal file

@@ -0,0 +1,2 @@
baboon.png (120,125,3) 1
comic.png (80,60,3) 1

tests/data/lq/baboon.png (new binary file, 35 KiB)
tests/data/lq/comic.png (new binary file, 14 KiB)

tests/data/meta_info_gt.txt Normal file

@@ -0,0 +1,2 @@
baboon.png
comic.png

tests/data/meta_info_pair.txt Normal file

@@ -0,0 +1,2 @@
gt/baboon.png, lq/baboon.png
gt/comic.png, lq/comic.png

tests/data/test_realesrgan_dataset.yml Normal file

@@ -0,0 +1,28 @@
name: Demo
type: RealESRGANDataset
dataroot_gt: tests/data/gt
meta_info: tests/data/meta_info_gt.txt
io_backend:
type: disk
blur_kernel_size: 21
kernel_list: ['iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso']
kernel_prob: [0.45, 0.25, 0.12, 0.03, 0.12, 0.03]
sinc_prob: 1
blur_sigma: [0.2, 3]
betag_range: [0.5, 4]
betap_range: [1, 2]
blur_kernel_size2: 21
kernel_list2: ['iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso']
kernel_prob2: [0.45, 0.25, 0.12, 0.03, 0.12, 0.03]
sinc_prob2: 1
blur_sigma2: [0.2, 1.5]
betag_range2: [0.5, 4]
betap_range2: [1, 2]
final_sinc_prob: 1
gt_size: 128
use_hflip: True
use_rot: False

tests/data/test_realesrgan_model.yml Normal file

@@ -0,0 +1,115 @@
scale: 4
num_gpu: 1
manual_seed: 0
is_train: True
dist: False
# ----------------- options for synthesizing training data ----------------- #
# USM the ground-truth
l1_gt_usm: True
percep_gt_usm: True
gan_gt_usm: False
# the first degradation process
resize_prob: [0.2, 0.7, 0.1] # up, down, keep
resize_range: [0.15, 1.5]
gaussian_noise_prob: 1
noise_range: [1, 30]
poisson_scale_range: [0.05, 3]
gray_noise_prob: 1
jpeg_range: [30, 95]
# the second degradation process
second_blur_prob: 1
resize_prob2: [0.3, 0.4, 0.3] # up, down, keep
resize_range2: [0.3, 1.2]
gaussian_noise_prob2: 1
noise_range2: [1, 25]
poisson_scale_range2: [0.05, 2.5]
gray_noise_prob2: 1
jpeg_range2: [30, 95]
gt_size: 32
queue_size: 1
# network structures
network_g:
type: RRDBNet
num_in_ch: 3
num_out_ch: 3
num_feat: 4
num_block: 1
num_grow_ch: 2
network_d:
type: UNetDiscriminatorSN
num_in_ch: 3
num_feat: 2
skip_connection: True
# path
path:
pretrain_network_g: ~
param_key_g: params_ema
strict_load_g: true
resume_state: ~
# training settings
train:
ema_decay: 0.999
optim_g:
type: Adam
lr: !!float 1e-4
weight_decay: 0
betas: [0.9, 0.99]
optim_d:
type: Adam
lr: !!float 1e-4
weight_decay: 0
betas: [0.9, 0.99]
scheduler:
type: MultiStepLR
milestones: [400000]
gamma: 0.5
total_iter: 400000
warmup_iter: -1 # no warm up
# losses
pixel_opt:
type: L1Loss
loss_weight: 1.0
reduction: mean
# perceptual loss (content and style losses)
perceptual_opt:
type: PerceptualLoss
layer_weights:
# before relu
'conv1_2': 0.1
'conv2_2': 0.1
'conv3_4': 1
'conv4_4': 1
'conv5_4': 1
vgg_type: vgg19
use_input_norm: true
perceptual_weight: !!float 1.0
style_weight: 0
range_norm: false
criterion: l1
# gan loss
gan_opt:
type: GANLoss
gan_type: vanilla
real_label_val: 1.0
fake_label_val: 0.0
loss_weight: !!float 1e-1
net_d_iters: 1
net_d_init_iters: 0
# validation settings
val:
val_freq: !!float 5e3
save_img: False

tests/data/test_realesrgan_paired_dataset.yml Normal file

@@ -0,0 +1,13 @@
name: Demo
type: RealESRGANPairedDataset
scale: 4
dataroot_gt: tests/data
dataroot_lq: tests/data
meta_info: tests/data/meta_info_pair.txt
io_backend:
type: disk
phase: train
gt_size: 128
use_hflip: True
use_rot: False

tests/data/test_realesrnet_model.yml Normal file

@@ -0,0 +1,75 @@
scale: 4
num_gpu: 1
manual_seed: 0
is_train: True
dist: False
# ----------------- options for synthesizing training data ----------------- #
gt_usm: True # USM the ground-truth
# the first degradation process
resize_prob: [0.2, 0.7, 0.1] # up, down, keep
resize_range: [0.15, 1.5]
gaussian_noise_prob: 1
noise_range: [1, 30]
poisson_scale_range: [0.05, 3]
gray_noise_prob: 1
jpeg_range: [30, 95]
# the second degradation process
second_blur_prob: 1
resize_prob2: [0.3, 0.4, 0.3] # up, down, keep
resize_range2: [0.3, 1.2]
gaussian_noise_prob2: 1
noise_range2: [1, 25]
poisson_scale_range2: [0.05, 2.5]
gray_noise_prob2: 1
jpeg_range2: [30, 95]
gt_size: 32
queue_size: 1
# network structures
network_g:
type: RRDBNet
num_in_ch: 3
num_out_ch: 3
num_feat: 4
num_block: 1
num_grow_ch: 2
# path
path:
pretrain_network_g: ~
param_key_g: params_ema
strict_load_g: true
resume_state: ~
# training settings
train:
ema_decay: 0.999
optim_g:
type: Adam
lr: !!float 2e-4
weight_decay: 0
betas: [0.9, 0.99]
scheduler:
type: MultiStepLR
milestones: [1000000]
gamma: 0.5
total_iter: 1000000
warmup_iter: -1 # no warm up
# losses
pixel_opt:
type: L1Loss
loss_weight: 1.0
reduction: mean
# validation settings
val:
val_freq: !!float 5e3
save_img: False

tests/test_dataset.py Normal file

@@ -0,0 +1,151 @@
import pytest
import yaml
from realesrgan.data.realesrgan_dataset import RealESRGANDataset
from realesrgan.data.realesrgan_paired_dataset import RealESRGANPairedDataset
def test_realesrgan_dataset():
with open('tests/data/test_realesrgan_dataset.yml', mode='r') as f:
opt = yaml.load(f, Loader=yaml.FullLoader)
dataset = RealESRGANDataset(opt)
assert dataset.io_backend_opt['type'] == 'disk' # io backend
assert len(dataset) == 2 # whether to read correct meta info
assert dataset.kernel_list == [
'iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso'
] # correct initialization of the degradation configurations
assert dataset.betag_range2 == [0.5, 4]
# test __getitem__
result = dataset.__getitem__(0)
# check returned keys
expected_keys = ['gt', 'kernel1', 'kernel2', 'sinc_kernel', 'gt_path']
assert set(expected_keys).issubset(set(result.keys()))
# check shape and contents
assert result['gt'].shape == (3, 400, 400)
assert result['kernel1'].shape == (21, 21)
assert result['kernel2'].shape == (21, 21)
assert result['sinc_kernel'].shape == (21, 21)
assert result['gt_path'] == 'tests/data/gt/baboon.png'
# ------------------ test lmdb backend -------------------- #
opt['dataroot_gt'] = 'tests/data/gt.lmdb'
opt['io_backend']['type'] = 'lmdb'
dataset = RealESRGANDataset(opt)
assert dataset.io_backend_opt['type'] == 'lmdb' # io backend
assert len(dataset.paths) == 2 # whether to read correct meta info
assert dataset.kernel_list == [
'iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso'
] # correct initialization of the degradation configurations
assert dataset.betag_range2 == [0.5, 4]
# test __getitem__
result = dataset.__getitem__(1)
# check returned keys
expected_keys = ['gt', 'kernel1', 'kernel2', 'sinc_kernel', 'gt_path']
assert set(expected_keys).issubset(set(result.keys()))
# check shape and contents
assert result['gt'].shape == (3, 400, 400)
assert result['kernel1'].shape == (21, 21)
assert result['kernel2'].shape == (21, 21)
assert result['sinc_kernel'].shape == (21, 21)
assert result['gt_path'] == 'comic'
# ------------------ test with sinc_prob = 0 -------------------- #
opt['dataroot_gt'] = 'tests/data/gt.lmdb'
opt['io_backend']['type'] = 'lmdb'
opt['sinc_prob'] = 0
opt['sinc_prob2'] = 0
opt['final_sinc_prob'] = 0
dataset = RealESRGANDataset(opt)
result = dataset.__getitem__(0)
# check returned keys
expected_keys = ['gt', 'kernel1', 'kernel2', 'sinc_kernel', 'gt_path']
assert set(expected_keys).issubset(set(result.keys()))
# check shape and contents
assert result['gt'].shape == (3, 400, 400)
assert result['kernel1'].shape == (21, 21)
assert result['kernel2'].shape == (21, 21)
assert result['sinc_kernel'].shape == (21, 21)
assert result['gt_path'] == 'baboon'
# ------------------ lmdb backend should have paths ending with lmdb -------------------- #
with pytest.raises(ValueError):
opt['dataroot_gt'] = 'tests/data/gt'
opt['io_backend']['type'] = 'lmdb'
dataset = RealESRGANDataset(opt)
def test_realesrgan_paired_dataset():
with open('tests/data/test_realesrgan_paired_dataset.yml', mode='r') as f:
opt = yaml.load(f, Loader=yaml.FullLoader)
dataset = RealESRGANPairedDataset(opt)
assert dataset.io_backend_opt['type'] == 'disk' # io backend
assert len(dataset) == 2 # whether to read correct meta info
# test __getitem__
result = dataset.__getitem__(0)
# check returned keys
expected_keys = ['gt', 'lq', 'gt_path', 'lq_path']
assert set(expected_keys).issubset(set(result.keys()))
# check shape and contents
assert result['gt'].shape == (3, 128, 128)
assert result['lq'].shape == (3, 32, 32)
assert result['gt_path'] == 'tests/data/gt/baboon.png'
assert result['lq_path'] == 'tests/data/lq/baboon.png'
# ------------------ test lmdb backend -------------------- #
opt['dataroot_gt'] = 'tests/data/gt.lmdb'
opt['dataroot_lq'] = 'tests/data/lq.lmdb'
opt['io_backend']['type'] = 'lmdb'
dataset = RealESRGANPairedDataset(opt)
assert dataset.io_backend_opt['type'] == 'lmdb' # io backend
assert len(dataset) == 2 # whether to read correct meta info
# test __getitem__
result = dataset.__getitem__(1)
# check returned keys
expected_keys = ['gt', 'lq', 'gt_path', 'lq_path']
assert set(expected_keys).issubset(set(result.keys()))
# check shape and contents
assert result['gt'].shape == (3, 128, 128)
assert result['lq'].shape == (3, 32, 32)
assert result['gt_path'] == 'comic'
assert result['lq_path'] == 'comic'
# ------------------ test paired_paths_from_folder -------------------- #
opt['dataroot_gt'] = 'tests/data/gt'
opt['dataroot_lq'] = 'tests/data/lq'
opt['io_backend'] = dict(type='disk')
opt['meta_info'] = None
dataset = RealESRGANPairedDataset(opt)
assert dataset.io_backend_opt['type'] == 'disk' # io backend
assert len(dataset) == 2 # whether to read correct meta info
# test __getitem__
result = dataset.__getitem__(0)
# check returned keys
expected_keys = ['gt', 'lq', 'gt_path', 'lq_path']
assert set(expected_keys).issubset(set(result.keys()))
# check shape and contents
assert result['gt'].shape == (3, 128, 128)
assert result['lq'].shape == (3, 32, 32)
# ------------------ test normalization -------------------- #
dataset.mean = [0.5, 0.5, 0.5]
dataset.std = [0.5, 0.5, 0.5]
# test __getitem__
result = dataset.__getitem__(0)
# check returned keys
expected_keys = ['gt', 'lq', 'gt_path', 'lq_path']
assert set(expected_keys).issubset(set(result.keys()))
# check shape and contents
assert result['gt'].shape == (3, 128, 128)
assert result['lq'].shape == (3, 32, 32)

tests/test_discriminator_arch.py Normal file

@@ -0,0 +1,19 @@
import torch

from realesrgan.archs.discriminator_arch import UNetDiscriminatorSN


def test_unetdiscriminatorsn():
    """Test arch: UNetDiscriminatorSN."""
    # model init and forward (cpu)
    net = UNetDiscriminatorSN(num_in_ch=3, num_feat=4, skip_connection=True)
    img = torch.rand((1, 3, 32, 32), dtype=torch.float32)
    output = net(img)
    assert output.shape == (1, 1, 32, 32)

    # model init and forward (gpu)
    if torch.cuda.is_available():
        net.cuda()
        output = net(img.cuda())
        assert output.shape == (1, 1, 32, 32)
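
The U-Net discriminator returns one realness score per pixel instead of a single scalar, which is why the spatial size is preserved above. A quick sketch at the default feature width, assuming input sides divisible by the network's internal downsampling factor:

net = UNetDiscriminatorSN(num_in_ch=3, num_feat=64, skip_connection=True)
fake = torch.rand((2, 3, 64, 64), dtype=torch.float32)
assert net(fake).shape == (2, 1, 64, 64)  # a per-pixel realness map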

tests/test_model.py Normal file

@@ -0,0 +1,126 @@
import torch
import yaml
from basicsr.archs.rrdbnet_arch import RRDBNet
from basicsr.data.paired_image_dataset import PairedImageDataset
from basicsr.losses.losses import GANLoss, L1Loss, PerceptualLoss

from realesrgan.archs.discriminator_arch import UNetDiscriminatorSN
from realesrgan.models.realesrgan_model import RealESRGANModel
from realesrgan.models.realesrnet_model import RealESRNetModel


def test_realesrnet_model():
    with open('tests/data/test_realesrnet_model.yml', mode='r') as f:
        opt = yaml.load(f, Loader=yaml.FullLoader)

    # build model
    model = RealESRNetModel(opt)
    # test attributes
    assert model.__class__.__name__ == 'RealESRNetModel'
    assert isinstance(model.net_g, RRDBNet)
    assert isinstance(model.cri_pix, L1Loss)
    assert isinstance(model.optimizers[0], torch.optim.Adam)

    # prepare data
    gt = torch.rand((1, 3, 32, 32), dtype=torch.float32)
    kernel1 = torch.rand((1, 5, 5), dtype=torch.float32)
    kernel2 = torch.rand((1, 5, 5), dtype=torch.float32)
    sinc_kernel = torch.rand((1, 5, 5), dtype=torch.float32)
    data = dict(gt=gt, kernel1=kernel1, kernel2=kernel2, sinc_kernel=sinc_kernel)
    model.feed_data(data)
    # check dequeue
    model.feed_data(data)
    # check data shape
    assert model.lq.shape == (1, 3, 8, 8)
    assert model.gt.shape == (1, 3, 32, 32)

    # change the probabilities to test the if-else branches
    model.opt['gaussian_noise_prob'] = 0
    model.opt['gray_noise_prob'] = 0
    model.opt['second_blur_prob'] = 0
    model.opt['gaussian_noise_prob2'] = 0
    model.opt['gray_noise_prob2'] = 0
    model.feed_data(data)
    # check data shape
    assert model.lq.shape == (1, 3, 8, 8)
    assert model.gt.shape == (1, 3, 32, 32)

    # ----------------- test nondist_validation -------------------- #
    # construct a dataloader
    dataset_opt = dict(
        name='Demo',
        dataroot_gt='tests/data/gt',
        dataroot_lq='tests/data/lq',
        io_backend=dict(type='disk'),
        scale=4,
        phase='val')
    dataset = PairedImageDataset(dataset_opt)
    dataloader = torch.utils.data.DataLoader(dataset=dataset, batch_size=1, shuffle=False, num_workers=0)
    assert model.is_train is True
    model.nondist_validation(dataloader, 1, None, False)
    assert model.is_train is True
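
Note that feed_data is not a plain setter here: it runs the on-the-fly second-order degradation (blur with kernel1/kernel2, noise, resizing, and a final sinc filter) to synthesize model.lq from model.gt, which is why a 32x32 gt yields an 8x8 lq, and the second call exercises the training-pair pool's dequeue path. A hedged sketch of the same shape contract with a larger batch:

# Sketch only: the 4x scale is inferred from the 32 -> 8 reduction above, and
# the queue size configured in the YAML is assumed divisible by the batch size.
gt = torch.rand((2, 3, 64, 64), dtype=torch.float32)
kernels = {k: torch.rand((2, 5, 5), dtype=torch.float32) for k in ('kernel1', 'kernel2', 'sinc_kernel')}
# model.feed_data(dict(gt=gt, **kernels))  # would yield model.lq.shape == (2, 3, 16, 16)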

def test_realesrgan_model():
    with open('tests/data/test_realesrgan_model.yml', mode='r') as f:
        opt = yaml.load(f, Loader=yaml.FullLoader)

    # build model
    model = RealESRGANModel(opt)
    # test attributes
    assert model.__class__.__name__ == 'RealESRGANModel'
    assert isinstance(model.net_g, RRDBNet)  # generator
    assert isinstance(model.net_d, UNetDiscriminatorSN)  # discriminator
    assert isinstance(model.cri_pix, L1Loss)
    assert isinstance(model.cri_perceptual, PerceptualLoss)
    assert isinstance(model.cri_gan, GANLoss)
    assert isinstance(model.optimizers[0], torch.optim.Adam)
    assert isinstance(model.optimizers[1], torch.optim.Adam)

    # prepare data
    gt = torch.rand((1, 3, 32, 32), dtype=torch.float32)
    kernel1 = torch.rand((1, 5, 5), dtype=torch.float32)
    kernel2 = torch.rand((1, 5, 5), dtype=torch.float32)
    sinc_kernel = torch.rand((1, 5, 5), dtype=torch.float32)
    data = dict(gt=gt, kernel1=kernel1, kernel2=kernel2, sinc_kernel=sinc_kernel)
    model.feed_data(data)
    # check dequeue
    model.feed_data(data)
    # check data shape
    assert model.lq.shape == (1, 3, 8, 8)
    assert model.gt.shape == (1, 3, 32, 32)

    # change the probabilities to test the if-else branches
    model.opt['gaussian_noise_prob'] = 0
    model.opt['gray_noise_prob'] = 0
    model.opt['second_blur_prob'] = 0
    model.opt['gaussian_noise_prob2'] = 0
    model.opt['gray_noise_prob2'] = 0
    model.feed_data(data)
    # check data shape
    assert model.lq.shape == (1, 3, 8, 8)
    assert model.gt.shape == (1, 3, 32, 32)

    # ----------------- test nondist_validation -------------------- #
    # construct a dataloader
    dataset_opt = dict(
        name='Demo',
        dataroot_gt='tests/data/gt',
        dataroot_lq='tests/data/lq',
        io_backend=dict(type='disk'),
        scale=4,
        phase='val')
    dataset = PairedImageDataset(dataset_opt)
    dataloader = torch.utils.data.DataLoader(dataset=dataset, batch_size=1, shuffle=False, num_workers=0)
    assert model.is_train is True
    model.nondist_validation(dataloader, 1, None, False)
    assert model.is_train is True

    # ----------------- test optimize_parameters -------------------- #
    model.feed_data(data)
    model.optimize_parameters(1)
    assert model.output.shape == (1, 3, 32, 32)
    assert isinstance(model.log_dict, dict)
    # check returned keys
    expected_keys = ['l_g_pix', 'l_g_percep', 'l_g_gan', 'l_d_real', 'out_d_real', 'l_d_fake', 'out_d_fake']
    assert set(expected_keys).issubset(set(model.log_dict.keys()))
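
The logged keys split along the GAN roles: l_g_pix, l_g_percep and l_g_gan are the generator's pixel, perceptual and adversarial losses, while l_d_real/l_d_fake (plus the out_d_* score summaries) come from the discriminator update. A small sketch for inspecting them after a step:

# Sketch: print the scalar losses recorded by optimize_parameters above.
for key in ('l_g_pix', 'l_g_percep', 'l_g_gan', 'l_d_real', 'l_d_fake'):
    print(f'{key}: {model.log_dict[key]:.4e}')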

tests/test_utils.py Normal file

@@ -0,0 +1,87 @@
import numpy as np
from basicsr.archs.rrdbnet_arch import RRDBNet

from realesrgan.utils import RealESRGANer


def test_realesrganer():
    # initialize with the default model
    restorer = RealESRGANer(
        scale=4,
        model_path='experiments/pretrained_models/RealESRGAN_x4plus.pth',
        model=None,
        tile=10,
        tile_pad=10,
        pre_pad=2,
        half=False)
    assert isinstance(restorer.model, RRDBNet)
    assert restorer.half is False

    # initialize with a user-defined model
    model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4)
    restorer = RealESRGANer(
        scale=4,
        model_path='experiments/pretrained_models/RealESRGAN_x4plus_anime_6B.pth',
        model=model,
        tile=10,
        tile_pad=10,
        pre_pad=2,
        half=True)
    # test attributes
    assert isinstance(restorer.model, RRDBNet)
    assert restorer.half is True

    # ------------------ test pre_process ---------------- #
    img = np.random.random((12, 12, 3)).astype(np.float32)
    restorer.pre_process(img)
    assert restorer.img.shape == (1, 3, 14, 14)
    # with modcrop
    restorer.scale = 1
    restorer.pre_process(img)
    assert restorer.img.shape == (1, 3, 16, 16)
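    # note (assumption): pre_pad reflection-pads the right/bottom border before
    # inference (12 + 2 -> 14); with scale 1 the input is additionally padded to
    # a multiple of mod_scale, presumably 4 here, giving 14 -> 16.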
    # ------------------ test process ---------------- #
    restorer.process()
    assert restorer.output.shape == (1, 3, 64, 64)

    # ------------------ test post_process ---------------- #
    restorer.mod_scale = 4
    output = restorer.post_process()
    assert output.shape == (1, 3, 60, 60)

    # ------------------ test tile_process ---------------- #
    restorer.scale = 4
    img = np.random.random((12, 12, 3)).astype(np.float32)
    restorer.pre_process(img)
    restorer.tile_process()
    assert restorer.output.shape == (1, 3, 64, 64)

    # ------------------ test enhance ---------------- #
    img = np.random.random((12, 12, 3)).astype(np.float32)
    result = restorer.enhance(img, outscale=2)
    assert result[0].shape == (24, 24, 3)
    assert result[1] == 'RGB'

    # ------------------ test enhance with a 16-bit image ---------------- #
    img = np.random.random((4, 4, 3)).astype(np.uint16) + 512
    result = restorer.enhance(img, outscale=2)
    assert result[0].shape == (8, 8, 3)
    assert result[1] == 'RGB'

    # ------------------ test enhance with a gray image ---------------- #
    img = np.random.random((4, 4)).astype(np.float32)
    result = restorer.enhance(img, outscale=2)
    assert result[0].shape == (8, 8)
    assert result[1] == 'L'

    # ------------------ test enhance with RGBA ---------------- #
    img = np.random.random((4, 4, 4)).astype(np.float32)
    result = restorer.enhance(img, outscale=2)
    assert result[0].shape == (8, 8, 4)
    assert result[1] == 'RGBA'

    # ------------------ test enhance with RGBA and the alpha upsampler ---------------- #
    restorer.tile_size = 0
    img = np.random.random((4, 4, 4)).astype(np.float32)
    result = restorer.enhance(img, outscale=2, alpha_upsampler=None)
    assert result[0].shape == (8, 8, 4)
    assert result[1] == 'RGBA'
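
Beyond these synthetic tensors, enhance is the user-facing entry point: it returns the upscaled image and its detected color mode, as the result[0]/result[1] assertions above show. A minimal usage sketch, assuming cv2 is installed, the weights above are present locally, and the input/output paths are hypothetical:

import cv2

img = cv2.imread('inputs/my_photo.jpg', cv2.IMREAD_UNCHANGED)  # hypothetical path
output, img_mode = restorer.enhance(img, outscale=4)           # (upscaled image, color mode)
cv2.imwrite('results/my_photo_out.jpg', output)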

@@ -1,10 +0,0 @@
import os.path as osp
from basicsr.train import train_pipeline

import archs  # noqa: F401
import data  # noqa: F401
import models  # noqa: F401

if __name__ == '__main__':
    root_path = osp.abspath(osp.join(__file__, osp.pardir))
    train_pipeline(root_path)