...
 
Commits (14)
- https://gitcode.net/XianxinMao/yt-dlp/-/commit/c8bc203fbf3bb09914e53f0833eed622ab7edbb9
  [docs] Misc improvements (2023-05-20T02:38:24+05:30, pukkandan <pukkandan.ytdlp@gmail.com>)
  Closes #6814, closes #6940, closes #6733, closes #6923, closes #6566, closes #6726, closes #6728
- https://gitcode.net/XianxinMao/yt-dlp/-/commit/f7f7a877bf8e87fd4eb0ad2494ad948ca7691114
  [extractor/booyah] Remove extractor (2023-05-20T04:05:22+05:30, pukkandan <pukkandan.ytdlp@gmail.com>)
  Site shut down. Closes #6425
- https://gitcode.net/XianxinMao/yt-dlp/-/commit/1d7656184c6b8aa46b29149893894b3c24f1df00
  [jsinterp] Handle `NaN` in bitwise operators (2023-05-20T04:07:17+05:30, pukkandan <pukkandan.ytdlp@gmail.com>)
  Closes #6131
- https://gitcode.net/XianxinMao/yt-dlp/-/commit/6f2287cb18cbfb27518f068d868fa9390fee78ad
  [cleanup] Misc (2023-05-20T04:23:41+05:30, pukkandan <pukkandan.ytdlp@gmail.com>)
  Closes #7030, closes #6967
- https://gitcode.net/XianxinMao/yt-dlp/-/commit/447afb9eaa65bc677e3245c83e53a8e69c174a3c
  [extractor/youtube] Support podcasts and releases tabs (2023-05-20T19:11:03+12:00, coletdjnz <coletdjnz@protonmail.com>)
  Closes https://github.com/yt-dlp/yt-dlp/issues/6893. Authored by: coletdjnz
- https://gitcode.net/XianxinMao/yt-dlp/-/commit/d2e84d5eb01c66fc5304e8566348d65a7be24ed7
  [update] Better error handling (2023-05-20T21:19:37+02:00, Simon Sawicki <contact@grub4k.xyz>)
  Authored by: pukkandan
- https://gitcode.net/XianxinMao/yt-dlp/-/commit/665472a7de3880578c0b7b3f95c71570c056368e
  [update] Implement `--update-to` repo (2023-05-20T21:21:32+02:00, Simon Sawicki <contact@grub4k.xyz>)
  Authored by: Grub4K, pukkandan
- https://gitcode.net/XianxinMao/yt-dlp/-/commit/44a79958f0b596ee71e1eb25f158610aada29d1b
  [build] Fix macOS target (2023-05-20T21:24:27+02:00, Simon Sawicki <contact@grub4k.xyz>)
  Authored by: Grub4K
- https://gitcode.net/XianxinMao/yt-dlp/-/commit/c4efa0aefec8daef1de62fd1693f13edf3c8b03c
  [build] Various build workflow improvements (2023-05-20T14:27:45-05:00, bashonly <bashonly@bashonly.com>)
  Wait for build before publishing to PyPI; do not run `meta_files` job if release is cancelled; customizable channel in release workflow; display badges above changelog. Authored by: bashonly, Grub4K
- https://gitcode.net/XianxinMao/yt-dlp/-/commit/b73193c99aa23b135732408a5fcf655c68d731c6
  [build] Implement build verification using `--update-to` (2023-05-20T14:27:53-05:00, bashonly <bashonly@bashonly.com>)
  Authored by: bashonly, Grub4K
- https://gitcode.net/XianxinMao/yt-dlp/-/commit/23c39a4beadee382060bb47fdaa21316ca707d38
  [devscripts] `make_changelog`: Various improvements (2023-05-20T21:30:02+02:00, Simon Sawicki <contact@grub4k.xyz>)
  Make single items collapse into one line; don't hide "Important changes" in `<details>`; move upstream merge into priority; properly support comma-separated prefixes. Authored by: Grub4K
- https://gitcode.net/XianxinMao/yt-dlp/-/commit/69bec6730ec9d724bcedeab199d9d684d61423ba
  [cleanup, utils] Split into submodules (#7090) (2023-05-20T21:56:23+00:00, coletdjnz <coletdjnz@protonmail.com>)
  Closes https://github.com/yt-dlp/yt-dlp/pull/2173. Authored by: pukkandan, coletdjnz. Co-authored-by: pukkandan <pukkandan.ytdlp@gmail.com>
- https://gitcode.net/XianxinMao/yt-dlp/-/commit/955c89584b66fcd0fcfab3e611f1edeb1ca63886
  [core] Deprecate internal `Youtubedl-no-compression` header (#6876) (2023-05-20T22:55:09+00:00, coletdjnz <coletdjnz@protonmail.com>)
  Authored by: coletdjnz
- https://gitcode.net/XianxinMao/yt-dlp/-/commit/69a40e4a7f6caa5662527ebd2f3c4e8aa02857a2
  [extractor/youtube:music:search_url] Extract title (#7102) (2023-05-22T17:17:06+05:30, kangalio <jannik.a.schaper@web.de>)
  Authored by: kangalio. Closes #7095
name: Broken site
description: Report error in a supported site
name: Broken site support
description: Report issue with yt-dlp on a supported site
labels: [triage, site-bug]
body:
- type: checkboxes
......@@ -16,7 +16,7 @@ body:
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting that a **supported** site is broken
- label: I'm reporting that yt-dlp is broken on a **supported** site
required: true
- label: I've verified that I'm running yt-dlp version **2023.03.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
required: true
......
name: Bug report
name: Core bug report
description: Report a bug unrelated to any particular site or extractor
labels: [triage, bug]
body:
......
name: Broken site
description: Report error in a supported site
name: Broken site support
description: Report issue with yt-dlp on a supported site
labels: [triage, site-bug]
body:
%(no_skip)s
......@@ -10,7 +10,7 @@ body:
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting that a **supported** site is broken
- label: I'm reporting that yt-dlp is broken on a **supported** site
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
required: true
......
name: Bug report
name: Core bug report
description: Report a bug unrelated to any particular site or extractor
labels: [triage, bug]
body:
......
......@@ -40,4 +40,10 @@ ### What is the purpose of your *pull request*?
- [ ] Core bug fix/improvement
- [ ] New feature (It is strongly [recommended to open an issue first](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#adding-new-feature-or-making-overarching-changes))
<!-- Do NOT edit/remove anything below this! -->
</details><details><summary>Copilot Summary</summary>
copilot:all
</details>
......@@ -41,7 +41,7 @@ on:
required: true
type: string
channel:
description: Update channel (stable/nightly)
description: Update channel (stable/nightly/...)
required: true
default: stable
type: string
......@@ -127,6 +127,19 @@ jobs:
mv ./dist/yt-dlp_linux ./yt-dlp_linux
mv ./dist/yt-dlp_linux.zip ./yt-dlp_linux.zip
- name: Verify --update-to
if: vars.UPDATE_TO_VERIFICATION
run: |
binaries=("yt-dlp" "yt-dlp_linux")
for binary in "${binaries[@]}"; do
chmod +x ./${binary}
cp ./${binary} ./${binary}_downgraded
version="$(./${binary} --version)"
./${binary}_downgraded -v --update-to yt-dlp/yt-dlp@2023.03.04
downgraded_version="$(./${binary}_downgraded --version)"
[[ "$version" != "$downgraded_version" ]]
done
- name: Upload artifacts
uses: actions/upload-artifact@v3
with:
......@@ -176,6 +189,16 @@ jobs:
python3.8 devscripts/make_lazy_extractors.py
python3.8 pyinst.py
if ${{ vars.UPDATE_TO_VERIFICATION && 'true' || 'false' }}; then
arch="${{ (matrix.architecture == 'armv7' && 'armv7l') || matrix.architecture }}"
chmod +x ./dist/yt-dlp_linux_${arch}
cp ./dist/yt-dlp_linux_${arch} ./dist/yt-dlp_linux_${arch}_downgraded
version="$(./dist/yt-dlp_linux_${arch} --version)"
./dist/yt-dlp_linux_${arch}_downgraded -v --update-to yt-dlp/yt-dlp@2023.03.04
downgraded_version="$(./dist/yt-dlp_linux_${arch}_downgraded --version)"
[[ "$version" != "$downgraded_version" ]]
fi
- name: Upload artifacts
uses: actions/upload-artifact@v3
with:
......@@ -188,21 +211,33 @@ jobs:
steps:
- uses: actions/checkout@v3
# NB: In order to create a universal2 application, the version of python3 in /usr/bin has to be used
# NB: Building universal2 does not work with python from actions/setup-python
- name: Install Requirements
run: |
brew install coreutils
/usr/bin/python3 -m pip install -U --user pip Pyinstaller==5.8 -r requirements.txt
python3 -m pip install -U --user pip setuptools wheel
# We need to ignore wheels otherwise we break universal2 builds
python3 -m pip install -U --user --no-binary :all: Pyinstaller -r requirements.txt
- name: Prepare
run: |
/usr/bin/python3 devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
/usr/bin/python3 devscripts/make_lazy_extractors.py
python3 devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
python3 devscripts/make_lazy_extractors.py
- name: Build
run: |
/usr/bin/python3 pyinst.py --target-architecture universal2 --onedir
python3 pyinst.py --target-architecture universal2 --onedir
(cd ./dist/yt-dlp_macos && zip -r ../yt-dlp_macos.zip .)
/usr/bin/python3 pyinst.py --target-architecture universal2
python3 pyinst.py --target-architecture universal2
- name: Verify --update-to
if: vars.UPDATE_TO_VERIFICATION
run: |
chmod +x ./dist/yt-dlp_macos
cp ./dist/yt-dlp_macos ./dist/yt-dlp_macos_downgraded
version="$(./dist/yt-dlp_macos --version)"
./dist/yt-dlp_macos_downgraded -v --update-to yt-dlp/yt-dlp@2023.03.04
downgraded_version="$(./dist/yt-dlp_macos_downgraded --version)"
[[ "$version" != "$downgraded_version" ]]
- name: Upload artifacts
uses: actions/upload-artifact@v3
......@@ -232,7 +267,8 @@ jobs:
- name: Install Requirements
run: |
brew install coreutils
python3 -m pip install -U --user pip Pyinstaller -r requirements.txt
python3 -m pip install -U --user pip setuptools wheel
python3 -m pip install -U --user Pyinstaller -r requirements.txt
- name: Prepare
run: |
......@@ -243,6 +279,16 @@ jobs:
python3 pyinst.py
mv dist/yt-dlp_macos dist/yt-dlp_macos_legacy
- name: Verify --update-to
if: vars.UPDATE_TO_VERIFICATION
run: |
chmod +x ./dist/yt-dlp_macos_legacy
cp ./dist/yt-dlp_macos_legacy ./dist/yt-dlp_macos_legacy_downgraded
version="$(./dist/yt-dlp_macos_legacy --version)"
./dist/yt-dlp_macos_legacy_downgraded -v --update-to yt-dlp/yt-dlp@2023.03.04
downgraded_version="$(./dist/yt-dlp_macos_legacy_downgraded --version)"
[[ "$version" != "$downgraded_version" ]]
- name: Upload artifacts
uses: actions/upload-artifact@v3
with:
......@@ -275,6 +321,19 @@ jobs:
python pyinst.py --onedir
Compress-Archive -Path ./dist/yt-dlp/* -DestinationPath ./dist/yt-dlp_win.zip
- name: Verify --update-to
if: vars.UPDATE_TO_VERIFICATION
run: |
foreach ($name in @("yt-dlp","yt-dlp_min")) {
Copy-Item "./dist/${name}.exe" "./dist/${name}_downgraded.exe"
$version = & "./dist/${name}.exe" --version
& "./dist/${name}_downgraded.exe" -v --update-to yt-dlp/yt-dlp@2023.03.04
$downgraded_version = & "./dist/${name}_downgraded.exe" --version
if ($version -eq $downgraded_version) {
exit 1
}
}
- name: Upload artifacts
uses: actions/upload-artifact@v3
with:
......@@ -306,6 +365,19 @@ jobs:
run: |
python pyinst.py
- name: Verify --update-to
if: vars.UPDATE_TO_VERIFICATION
run: |
foreach ($name in @("yt-dlp_x86")) {
Copy-Item "./dist/${name}.exe" "./dist/${name}_downgraded.exe"
$version = & "./dist/${name}.exe" --version
& "./dist/${name}_downgraded.exe" -v --update-to yt-dlp/yt-dlp@2023.03.04
$downgraded_version = & "./dist/${name}_downgraded.exe" --version
if ($version -eq $downgraded_version) {
exit 1
}
}
- name: Upload artifacts
uses: actions/upload-artifact@v3
with:
......@@ -313,7 +385,7 @@ jobs:
dist/yt-dlp_x86.exe
meta_files:
if: inputs.meta_files && always()
if: inputs.meta_files && always() && !cancelled()
needs:
- unix
- linux_arm
......
......@@ -2,16 +2,20 @@ name: Publish
on:
workflow_call:
inputs:
nightly:
default: false
required: false
type: boolean
channel:
default: stable
required: true
type: string
version:
required: true
type: string
target_commitish:
required: true
type: string
prerelease:
default: false
required: true
type: boolean
secrets:
ARCHIVE_REPO_TOKEN:
required: false
......@@ -34,16 +38,27 @@ jobs:
- name: Generate release notes
run: |
printf '%s' \
'[![Installation](https://img.shields.io/badge/-Which%20file%20should%20I%20download%3F-white.svg?style=for-the-badge)]' \
'(https://github.com/yt-dlp/yt-dlp#installation "Installation instructions") ' \
'[![Documentation](https://img.shields.io/badge/-Docs-brightgreen.svg?style=for-the-badge&logo=GitBook&labelColor=555555)]' \
'(https://github.com/yt-dlp/yt-dlp/tree/2023.03.04#readme "Documentation") ' \
'[![Donate](https://img.shields.io/badge/_-Donate-red.svg?logo=githubsponsors&labelColor=555555&style=for-the-badge)]' \
'(https://github.com/yt-dlp/yt-dlp/blob/master/Collaborators.md#collaborators "Donate") ' \
'[![Discord](https://img.shields.io/discord/807245652072857610?color=blue&labelColor=555555&label=&logo=discord&style=for-the-badge)]' \
'(https://discord.gg/H5MNcFW63r "Discord") ' \
${{ inputs.channel != 'nightly' && '"[![Nightly](https://img.shields.io/badge/Get%20nightly%20builds-purple.svg?style=for-the-badge)]" \
"(https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/latest \"Nightly builds\")"' || '' }} \
> ./RELEASE_NOTES
printf '\n\n' >> ./RELEASE_NOTES
cat >> ./RELEASE_NOTES << EOF
#### A description of the various files are in the [README](https://github.com/yt-dlp/yt-dlp#release-files)
---
<details><summary><h3>Changelog</h3></summary>
$(python ./devscripts/make_changelog.py -vv)
</details>
$(python ./devscripts/make_changelog.py -vv --collapsible)
EOF
echo "**This is an automated nightly pre-release build**" >> ./PRERELEASE_NOTES
cat ./RELEASE_NOTES >> ./PRERELEASE_NOTES
echo "Generated from: https://github.com/${{ github.repository }}/commit/${{ inputs.target_commitish }}" >> ./ARCHIVE_NOTES
printf '%s\n\n' '**This is an automated nightly pre-release build**' >> ./NIGHTLY_NOTES
cat ./RELEASE_NOTES >> ./NIGHTLY_NOTES
printf '%s\n\n' 'Generated from: https://github.com/${{ github.repository }}/commit/${{ inputs.target_commitish }}' >> ./ARCHIVE_NOTES
cat ./RELEASE_NOTES >> ./ARCHIVE_NOTES
- name: Archive nightly release
......@@ -51,7 +66,7 @@ jobs:
GH_TOKEN: ${{ secrets.ARCHIVE_REPO_TOKEN }}
GH_REPO: ${{ vars.ARCHIVE_REPO }}
if: |
inputs.nightly && env.GH_TOKEN != '' && env.GH_REPO != ''
inputs.channel == 'nightly' && env.GH_TOKEN != '' && env.GH_REPO != ''
run: |
gh release create \
--notes-file ARCHIVE_NOTES \
......@@ -60,7 +75,7 @@ jobs:
artifact/*
- name: Prune old nightly release
if: inputs.nightly && !vars.ARCHIVE_REPO
if: inputs.channel == 'nightly' && !vars.ARCHIVE_REPO
env:
GH_TOKEN: ${{ github.token }}
run: |
......@@ -68,14 +83,15 @@ jobs:
git tag --delete "nightly" || true
sleep 5 # Enough time to cover deletion race condition
- name: Publish release${{ inputs.nightly && ' (nightly)' || '' }}
- name: Publish release${{ inputs.channel == 'nightly' && ' (nightly)' || '' }}
env:
GH_TOKEN: ${{ github.token }}
if: (inputs.nightly && !vars.ARCHIVE_REPO) || !inputs.nightly
if: (inputs.channel == 'nightly' && !vars.ARCHIVE_REPO) || inputs.channel != 'nightly'
run: |
gh release create \
--notes-file ${{ inputs.nightly && 'PRE' || '' }}RELEASE_NOTES \
--notes-file ${{ inputs.channel == 'nightly' && 'NIGHTLY_NOTES' || 'RELEASE_NOTES' }} \
--target ${{ inputs.target_commitish }} \
--title "yt-dlp ${{ inputs.nightly && 'nightly ' || '' }}${{ inputs.version }}" \
${{ inputs.nightly && '--prerelease "nightly"' || inputs.version }} \
--title "yt-dlp ${{ inputs.channel == 'nightly' && 'nightly ' || '' }}${{ inputs.version }}" \
${{ inputs.prerelease && '--prerelease' || '' }} \
${{ inputs.channel == 'nightly' && '"nightly"' || inputs.version }} \
artifact/*
......@@ -46,6 +46,7 @@ jobs:
permissions:
contents: write
with:
nightly: true
channel: nightly
prerelease: true
version: ${{ needs.prepare.outputs.version }}
target_commitish: ${{ github.sha }}
name: Release
on: workflow_dispatch
on:
workflow_dispatch:
inputs:
version:
description: Version tag (YYYY.MM.DD[.REV])
required: false
default: ''
type: string
channel:
description: Update channel (stable/nightly/...)
required: false
default: ''
type: string
prerelease:
description: Pre-release
default: false
type: boolean
permissions:
contents: read
......@@ -9,8 +26,9 @@ jobs:
contents: write
runs-on: ubuntu-latest
outputs:
channel: ${{ steps.set_channel.outputs.channel }}
version: ${{ steps.update_version.outputs.version }}
head_sha: ${{ steps.push_release.outputs.head_sha }}
head_sha: ${{ steps.get_target.outputs.head_sha }}
steps:
- uses: actions/checkout@v3
......@@ -21,10 +39,18 @@ jobs:
with:
python-version: "3.10"
- name: Set channel
id: set_channel
run: |
CHANNEL="${{ github.repository == 'yt-dlp/yt-dlp' && 'stable' || github.repository }}"
echo "channel=${{ inputs.channel || '$CHANNEL' }}" > "$GITHUB_OUTPUT"
- name: Update version
id: update_version
run: |
python devscripts/update-version.py ${{ vars.PUSH_VERSION_COMMIT == '' && '"$(date -u +"%H%M%S")"' || '' }} | \
REVISION="${{ vars.PUSH_VERSION_COMMIT == '' && '$(date -u +"%H%M%S")' || '' }}"
REVISION="${{ inputs.prerelease && '$(date -u +"%H%M%S")' || '$REVISION' }}"
python devscripts/update-version.py ${{ inputs.version || '$REVISION' }} | \
grep -Po "version=\d+\.\d+\.\d+(\.\d+)?" >> "$GITHUB_OUTPUT"
- name: Update documentation
......@@ -39,6 +65,7 @@ jobs:
- name: Push to release
id: push_release
if: ${{ !inputs.prerelease }}
run: |
git config --global user.name github-actions
git config --global user.email github-actions@example.com
......@@ -46,14 +73,30 @@ jobs:
git commit -m "Release ${{ steps.update_version.outputs.version }}" \
-m "Created by: ${{ github.event.sender.login }}" -m ":ci skip all :ci run dl"
git push origin --force ${{ github.event.ref }}:release
- name: Get target commitish
id: get_target
run: |
echo "head_sha=$(git rev-parse HEAD)" >> "$GITHUB_OUTPUT"
- name: Update master
if: vars.PUSH_VERSION_COMMIT != ''
if: vars.PUSH_VERSION_COMMIT != '' && !inputs.prerelease
run: git push origin ${{ github.event.ref }}
publish_pypi_homebrew:
build:
needs: prepare
uses: ./.github/workflows/build.yml
with:
version: ${{ needs.prepare.outputs.version }}
channel: ${{ needs.prepare.outputs.channel }}
permissions:
contents: read
packages: write # For package cache
secrets:
GPG_SIGNING_KEY: ${{ secrets.GPG_SIGNING_KEY }}
publish_pypi_homebrew:
needs: [prepare, build]
runs-on: ubuntu-latest
steps:
......@@ -77,7 +120,7 @@ jobs:
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
if: env.TWINE_PASSWORD != ''
if: env.TWINE_PASSWORD != '' && !inputs.prerelease
run: |
rm -rf dist/*
make pypi-files
......@@ -89,7 +132,7 @@ jobs:
env:
BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
if: env.BREW_TOKEN != '' && env.PYPI_TOKEN != ''
if: env.BREW_TOKEN != '' && env.PYPI_TOKEN != '' && !inputs.prerelease
uses: actions/checkout@v3
with:
repository: yt-dlp/homebrew-taps
......@@ -100,7 +143,7 @@ jobs:
env:
BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
if: env.BREW_TOKEN != '' && env.PYPI_TOKEN != ''
if: env.BREW_TOKEN != '' && env.PYPI_TOKEN != '' && !inputs.prerelease
run: |
python devscripts/update-formulae.py taps/Formula/yt-dlp.rb "${{ needs.prepare.outputs.version }}"
git -C taps/ config user.name github-actions
......@@ -108,22 +151,13 @@ jobs:
git -C taps/ commit -am 'yt-dlp: ${{ needs.prepare.outputs.version }}'
git -C taps/ push
build:
needs: prepare
uses: ./.github/workflows/build.yml
with:
version: ${{ needs.prepare.outputs.version }}
permissions:
contents: read
packages: write # For package cache
secrets:
GPG_SIGNING_KEY: ${{ secrets.GPG_SIGNING_KEY }}
publish:
needs: [prepare, build]
uses: ./.github/workflows/publish.yml
permissions:
contents: write
with:
channel: ${{ needs.prepare.outputs.channel }}
prerelease: ${{ inputs.prerelease }}
version: ${{ needs.prepare.outputs.version }}
target_commitish: ${{ needs.prepare.outputs.head_sha }}
......@@ -79,7 +79,7 @@ ### Are you using the latest version?
### Is the issue already documented?
Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or browse the [GitHub Issues](https://github.com/yt-dlp/yt-dlp/search?type=Issues) of this repository. If there is an issue, feel free to write something along the lines of "This affects me as well, with version 2021.01.01. Here is some more information on the issue: ...". While some issues may be old, a new post into them often spurs rapid activity.
Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or browse the [GitHub Issues](https://github.com/yt-dlp/yt-dlp/search?type=Issues) of this repository. If there is an issue, subscribe to it to be notified when there is any progress. Unless you have something useful to add to the conversation, please refrain from commenting.
Additionally, it is also helpful to see if the issue has already been documented in the [youtube-dl issue tracker](https://github.com/ytdl-org/youtube-dl/issues). If similar issues have already been reported in youtube-dl (but not in our issue tracker), links to them can be included in your issue report here.
......@@ -246,7 +246,7 @@ ## yt-dlp coding conventions
This section introduces guidelines for writing idiomatic, robust and future-proof extractor code.
Extractors are very fragile by nature since they depend on the layout of the source data provided by 3rd party media hosters out of your control and this layout tends to change. As an extractor implementer your task is not only to write code that will extract media links and metadata correctly but also to minimize dependency on the source's layout and even to make the code foresee potential future changes and be ready for that. This is important because it will allow the extractor not to break on minor layout changes thus keeping old yt-dlp versions working. Even though this breakage issue may be easily fixed by a new version of yt-dlp, this could take some time, during which the the extractor will remain broken.
Extractors are very fragile by nature since they depend on the layout of the source data provided by 3rd party media hosters out of your control and this layout tends to change. As an extractor implementer your task is not only to write code that will extract media links and metadata correctly but also to minimize dependency on the source's layout and even to make the code foresee potential future changes and be ready for that. This is important because it will allow the extractor not to break on minor layout changes thus keeping old yt-dlp versions working. Even though this breakage issue may be easily fixed by a new version of yt-dlp, this could take some time, during which the extractor will remain broken.
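In practice, this defensive style usually means probing several plausible data paths instead of hard-coding one. A minimal sketch using yt-dlp's `traverse_obj` helper (the JSON paths here are hypothetical, not from any real site):

```python
from yt_dlp.utils import traverse_obj

def extract_title(data):
    # Try each path in order and return the first string found;
    # a minor layout change then degrades to None instead of a crash.
    return traverse_obj(
        data,
        ('video', 'title'),       # current (hypothetical) layout
        ('meta', 'og', 'title'),  # alternate (hypothetical) layout
        expected_type=str)
```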
### Mandatory and optional metafields
......
......@@ -8,7 +8,7 @@ # Collaborators
## [pukkandan](https://github.com/pukkandan)
[![ko-fi](https://img.shields.io/badge/_-Ko--fi-red.svg?logo=kofi&labelColor=555555&style=for-the-badge)](https://ko-fi.com/pukkandan)
[![gh-sponsor](https://img.shields.io/badge/_-Github-red.svg?logo=github&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/pukkandan)
[![gh-sponsor](https://img.shields.io/badge/_-Github-white.svg?logo=github&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/pukkandan)
* Owner of the fork
......@@ -26,7 +26,7 @@ ## [shirt](https://github.com/shirt-dev)
## [coletdjnz](https://github.com/coletdjnz)
[![gh-sponsor](https://img.shields.io/badge/_-Github-red.svg?logo=github&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/coletdjnz)
[![gh-sponsor](https://img.shields.io/badge/_-Github-white.svg?logo=github&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/coletdjnz)
* Improved plugin architecture
* YouTube improvements including: age-gate bypass, private playlists, multiple-clients (to avoid throttling) and a lot of under-the-hood improvements
......@@ -44,7 +44,7 @@ ## [Ashish0804](https://github.com/Ashish0804) <sub><sup>[Inactive]</sup></sub>
* Improved/fixed support for HiDive, HotStar, Hungama, LBRY, LinkedInLearning, Mxplayer, SonyLiv, TV2, Vimeo, VLive etc
## [Lesmiscore](https://github.com/Lesmiscore) <sub><sup>(nao20010128nao)</sup></sub>
## [Lesmiscore](https://github.com/Lesmiscore)
**Bitcoin**: bc1qfd02r007cutfdjwjmyy9w23rjvtls6ncve7r3s
**Monacoin**: mona1q3tf7dzvshrhfe3md379xtvt2n22duhglv5dskr
......@@ -64,7 +64,7 @@ ## [bashonly](https://github.com/bashonly)
## [Grub4K](https://github.com/Grub4K)
[![ko-fi](https://img.shields.io/badge/_-Ko--fi-red.svg?logo=kofi&labelColor=555555&style=for-the-badge)](https://ko-fi.com/Grub4K) [![gh-sponsor](https://img.shields.io/badge/_-Github-red.svg?logo=github&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/Grub4K)
[![ko-fi](https://img.shields.io/badge/_-Ko--fi-red.svg?logo=kofi&labelColor=555555&style=for-the-badge)](https://ko-fi.com/Grub4K) [![gh-sponsor](https://img.shields.io/badge/_-Github-white.svg?logo=github&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/Grub4K)
* `--update-to`, automated release, nightly builds
* Rework internals like `traverse_obj`, various core refactors and bugs fixes
......
......@@ -74,7 +74,7 @@ offlinetest: codetest
$(PYTHON) -m pytest -k "not download"
# XXX: This is hard to maintain
CODE_FOLDERS = yt_dlp yt_dlp/downloader yt_dlp/extractor yt_dlp/postprocessor yt_dlp/compat yt_dlp/dependencies
CODE_FOLDERS = yt_dlp yt_dlp/downloader yt_dlp/extractor yt_dlp/postprocessor yt_dlp/compat yt_dlp/utils yt_dlp/dependencies
yt-dlp: yt_dlp/*.py yt_dlp/*/*.py
mkdir -p zip
for d in $(CODE_FOLDERS) ; do \
......
......@@ -85,7 +85,7 @@ # NEW FEATURES
* **Merged with animelover1984/youtube-dl**: You get most of the features and improvements from [animelover1984/youtube-dl](https://github.com/animelover1984/youtube-dl) including `--write-comments`, `BiliBiliSearch`, `BilibiliChannel`, Embedding thumbnail in mp4/ogg/opus, playlist infojson etc. Note that NicoNico livestreams are not available. See [#31](https://github.com/yt-dlp/yt-dlp/pull/31) for details.
* **YouTube improvements**:
* Supports Clips, Stories (`ytstories:<channel UCID>`), Search (including filters)**\***, YouTube Music Search, Channel-specific search, Search prefixes (`ytsearch:`, `ytsearchdate:`)**\***, Mixes, YouTube Music Albums/Channels ([except self-uploaded music](https://github.com/yt-dlp/yt-dlp/issues/723)), and Feeds (`:ytfav`, `:ytwatchlater`, `:ytsubs`, `:ythistory`, `:ytrec`, `:ytnotif`)
* Supports Clips, Stories (`ytstories:<channel UCID>`), Search (including filters)**\***, YouTube Music Search, Channel-specific search, Search prefixes (`ytsearch:`, `ytsearchdate:`)**\***, Mixes, and Feeds (`:ytfav`, `:ytwatchlater`, `:ytsubs`, `:ythistory`, `:ytrec`, `:ytnotif`)
* Fix for [n-sig based throttling](https://github.com/ytdl-org/youtube-dl/issues/29326) **\***
* Supports some (but not all) age-gated content without cookies
* Download livestreams from the start using `--live-from-start` (*experimental*)
......@@ -179,13 +179,13 @@ # INSTALLATION
[![All versions](https://img.shields.io/badge/-All_Versions-lightgrey.svg?style=for-the-badge)](https://github.com/yt-dlp/yt-dlp/releases)
<!-- MANPAGE: END EXCLUDED SECTION -->
You can install yt-dlp using [the binaries](#release-files), [PIP](https://pypi.org/project/yt-dlp) or a third-party package manager. See [the wiki](https://github.com/yt-dlp/yt-dlp/wiki/Installation) for detailed instructions
You can install yt-dlp using [the binaries](#release-files), [pip](https://pypi.org/project/yt-dlp) or a third-party package manager. See [the wiki](https://github.com/yt-dlp/yt-dlp/wiki/Installation) for detailed instructions
## UPDATE
You can use `yt-dlp -U` to update if you are using the [release binaries](#release-files)
If you [installed with PIP](https://github.com/yt-dlp/yt-dlp/wiki/Installation#with-pip), simply re-run the same command that was used to install the program
If you [installed with pip](https://github.com/yt-dlp/yt-dlp/wiki/Installation#with-pip), simply re-run the same command that was used to install the program
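For a plain PyPI installation, that usually amounts to `python3 -m pip install -U yt-dlp`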
For other third-party package managers, see [the wiki](https://github.com/yt-dlp/yt-dlp/wiki/Installation#third-party-package-managers) or refer to their documentation
......@@ -196,12 +196,15 @@ ## UPDATE
The `nightly` channel has releases built after each push to the master branch, and will have the most recent fixes and additions, but also have more risk of regressions. They are available in [their own repo](https://github.com/yt-dlp/yt-dlp-nightly-builds/releases).
When using `--update`/`-U`, a release binary will only update to its current channel.
This release channel can be changed by using the `--update-to` option. `--update-to` can also be used to upgrade or downgrade to specific tags from a channel.
`--update-to CHANNEL` can be used to switch to a different channel when a newer version is available. `--update-to [CHANNEL@]TAG` can also be used to upgrade or downgrade to specific tags from a channel.
You may also use `--update-to <repository>` (`<owner>/<repository>`) to update to a channel on a completely different repository. Be careful with what repository you are updating to, though; there is no verification done for binaries from different repositories.
Example usage:
* `yt-dlp --update-to nightly` change to `nightly` channel and update to its latest release
* `yt-dlp --update-to stable@2023.02.17` upgrade/downgrade to release to `stable` channel tag `2023.02.17`
* `yt-dlp --update-to 2023.01.06` upgrade/downgrade to tag `2023.01.06` if it exists on the current channel
* `yt-dlp --update-to example/yt-dlp@2023.03.01` upgrade/downgrade to the release from the `example/yt-dlp` repository, tag `2023.03.01`
<!-- MANPAGE: BEGIN EXCLUDED SECTION -->
## RELEASE FILES
......@@ -360,10 +363,10 @@ ## General Options:
-U, --update Update this program to the latest version
--no-update Do not check for updates (default)
--update-to [CHANNEL]@[TAG] Upgrade/downgrade to a specific version.
CHANNEL and TAG defaults to "stable" and
"latest" respectively if omitted; See
"UPDATE" for details. Supported channels:
stable, nightly
CHANNEL can be a repository as well. CHANNEL
and TAG default to "stable" and "latest"
respectively if omitted; See "UPDATE" for
details. Supported channels: stable, nightly
-i, --ignore-errors Ignore download and postprocessing errors.
The download will be considered successful
even if the postprocessing fails
......@@ -409,7 +412,8 @@ ## General Options:
configuration files
--flat-playlist Do not extract the videos of a playlist,
only list them
--no-flat-playlist Extract the videos of a playlist
--no-flat-playlist Fully extract the videos of a playlist
(default)
--live-from-start Download livestreams from the start.
Currently only supported for YouTube
(Experimental)
......@@ -465,9 +469,9 @@ ## Geo-restriction:
downloading
--xff VALUE How to fake X-Forwarded-For HTTP header to
try bypassing geographic restriction. One of
"default" (Only when known to be useful),
"never", a two-letter ISO 3166-2 country
code, or an IP block in CIDR notation
"default" (only when known to be useful),
"never", an IP block in CIDR notation, or a
two-letter ISO 3166-2 country code
## Video Selection:
-I, --playlist-items ITEM_SPEC Comma separated playlist_index of the items
......@@ -514,7 +518,7 @@ ## Video Selection:
dogs" (caseless). Use "--match-filter -" to
interactively ask whether to download each
video
--no-match-filter Do not use any --match-filter (default)
--no-match-filters Do not use any --match-filter (default)
--break-match-filters FILTER Same as "--match-filters" but stops the
download process when a video is rejected
--no-break-match-filters Do not use any --break-match-filters (default)
......@@ -1709,7 +1713,7 @@ # MODIFYING METADATA
This option also has a few special uses:
* You can download an additional URL based on the metadata of the currently downloaded video. To do this, set the field `additional_urls` to the URL that you want to download. E.g. `--parse-metadata "description:(?P<additional_urls>https?://www\.vimeo\.com/\d+)` will download the first vimeo video found in the description
* You can download an additional URL based on the metadata of the currently downloaded video. To do this, set the field `additional_urls` to the URL that you want to download. E.g. `--parse-metadata "description:(?P<additional_urls>https?://www\.vimeo\.com/\d+)"` will download the first vimeo video found in the description
* You can use this to change the metadata that is embedded in the media file. To do this, set the value of the corresponding field with a `meta_` prefix. For example, any value you set to `meta_description` field will be added to the `description` field in the file - you can use this to set a different "description" and "synopsis". To modify the metadata of individual streams, use the `meta<n>_` prefix (e.g. `meta1_language`). Any value set to the `meta_` field will overwrite all default values.
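* For illustration (an assumed example, not from the original docs): `--parse-metadata "uploader:(?P<meta_artist>.+)"` would copy the uploader's name into the `artist` field that gets embedded; any regex with a `meta_`-prefixed named group works the same way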
......@@ -1883,7 +1887,7 @@ ## Installing Plugins
* **System Plugins**
* `/etc/yt-dlp/plugins/<package name>/yt_dlp_plugins/`
* `/etc/yt-dlp-plugins/<package name>/yt_dlp_plugins/`
2. **Executable location**: Plugin packages can similarly be installed in a `yt-dlp-plugins` directory under the executable location:
2. **Executable location**: Plugin packages can similarly be installed in a `yt-dlp-plugins` directory under the executable location (recommended for portable installations):
* Binary: where `<root-dir>/yt-dlp.exe`, `<root-dir>/yt-dlp-plugins/<package name>/yt_dlp_plugins/`
* Source: where `<root-dir>/yt_dlp/__main__.py`, `<root-dir>/yt-dlp-plugins/<package name>/yt_dlp_plugins/`
......@@ -2071,7 +2075,7 @@ #### Use a custom format selector
```python
import yt_dlp
URL = ['https://www.youtube.com/watch?v=BaW_jenozKc']
URLS = ['https://www.youtube.com/watch?v=BaW_jenozKc']
def format_selector(ctx):
""" Select the best video and the best audio that won't result in an mkv.
......
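The example above is cut off by the diff view. A minimal, self-contained sketch of the selector callback's shape (deliberately simpler than the README's full example, which merges separate video and audio formats):

```python
import yt_dlp

URLS = ['https://www.youtube.com/watch?v=BaW_jenozKc']

def format_selector(ctx):
    """Yield the best format that already has both audio and video,
    so no merge into mkv is needed. Illustrative only."""
    # ctx['formats'] is sorted worst to best, so scan from the end
    for f in reversed(ctx['formats']):
        if f.get('vcodec') != 'none' and f.get('acodec') != 'none':
            yield f
            return

with yt_dlp.YoutubeDL({'format': format_selector}) as ydl:
    ydl.download(URLS)
```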
......@@ -26,7 +26,6 @@
class CommitGroup(enum.Enum):
UPSTREAM = None
PRIORITY = 'Important'
CORE = 'Core'
EXTRACTOR = 'Extractor'
......@@ -34,6 +33,11 @@ class CommitGroup(enum.Enum):
POSTPROCESSOR = 'Postprocessor'
MISC = 'Misc.'
@classmethod
@property
def ignorable_prefixes(cls):
return ('core', 'downloader', 'extractor', 'misc', 'postprocessor', 'upstream')
@classmethod
@lru_cache
def commit_lookup(cls):
......@@ -41,7 +45,6 @@ def commit_lookup(cls):
name: group
for group, names in {
cls.PRIORITY: {''},
cls.UPSTREAM: {'upstream'},
cls.CORE: {
'aes',
'cache',
......@@ -54,6 +57,7 @@ def commit_lookup(cls):
'outtmpl',
'plugins',
'update',
'upstream',
'utils',
},
cls.MISC: {
......@@ -111,22 +115,36 @@ def key(self):
return ((self.details or '').lower(), self.sub_details, self.message)
def unique(items):
return sorted({item.strip().lower(): item for item in items if item}.values())
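For illustration, the behavior this definition implies: case-insensitive deduplication where the last spelling wins, empty values dropped, and the result sorted:

```python
assert unique(['core', 'Core', 'utils', '']) == ['Core', 'utils']
```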
class Changelog:
MISC_RE = re.compile(r'(?:^|\b)(?:lint(?:ing)?|misc|format(?:ting)?|fixes)(?:\b|$)', re.IGNORECASE)
ALWAYS_SHOWN = (CommitGroup.PRIORITY,)
def __init__(self, groups, repo):
def __init__(self, groups, repo, collapsible=False):
self._groups = groups
self._repo = repo
self._collapsible = collapsible
def __str__(self):
return '\n'.join(self._format_groups(self._groups)).replace('\t', ' ')
def _format_groups(self, groups):
first = True
for item in CommitGroup:
if self._collapsible and item not in self.ALWAYS_SHOWN and first:
first = False
yield '\n<details><summary><h3>Changelog</h3></summary>\n'
group = groups[item]
if group:
yield self.format_module(item.value, group)
if self._collapsible:
yield '\n</details>'
def format_module(self, name, group):
result = f'\n#### {name} changes\n' if name else '\n'
return result + '\n'.join(self._format_group(group))
......@@ -137,62 +155,52 @@ def _format_group(self, group):
for _, items in detail_groups:
items = list(items)
details = items[0].details
if not details:
indent = ''
else:
yield f'- {details}'
indent = '\t'
if details == 'cleanup':
items, cleanup_misc_items = self._filter_cleanup_misc_items(items)
items = self._prepare_cleanup_misc_items(items)
prefix = '-'
if details:
if len(items) == 1:
prefix = f'- **{details}**:'
else:
yield f'- **{details}**'
prefix = '\t-'
sub_detail_groups = itertools.groupby(items, lambda item: tuple(map(str.lower, item.sub_details)))
for sub_details, entries in sub_detail_groups:
if not sub_details:
for entry in entries:
yield f'{indent}- {self.format_single_change(entry)}'
yield f'{prefix} {self.format_single_change(entry)}'
continue
entries = list(entries)
prefix = f'{indent}- {", ".join(entries[0].sub_details)}'
sub_prefix = f'{prefix} {", ".join(entries[0].sub_details)}'
if len(entries) == 1:
yield f'{prefix}: {self.format_single_change(entries[0])}'
yield f'{sub_prefix}: {self.format_single_change(entries[0])}'
continue
yield prefix
yield sub_prefix
for entry in entries:
yield f'{indent}\t- {self.format_single_change(entry)}'
if details == 'cleanup' and cleanup_misc_items:
yield from self._format_cleanup_misc_sub_group(cleanup_misc_items)
yield f'\t{prefix} {self.format_single_change(entry)}'
def _filter_cleanup_misc_items(self, items):
def _prepare_cleanup_misc_items(self, items):
cleanup_misc_items = defaultdict(list)
non_misc_items = []
sorted_items = []
for item in items:
if self.MISC_RE.search(item.message):
cleanup_misc_items[tuple(item.commit.authors)].append(item)
else:
non_misc_items.append(item)
return non_misc_items, cleanup_misc_items
def _format_cleanup_misc_sub_group(self, group):
prefix = '\t- Miscellaneous'
if len(group) == 1:
yield f'{prefix}: {next(self._format_cleanup_misc_items(group))}'
return
sorted_items.append(item)
yield prefix
for message in self._format_cleanup_misc_items(group):
yield f'\t\t- {message}'
for commit_infos in cleanup_misc_items.values():
sorted_items.append(CommitInfo(
'cleanup', ('Miscellaneous',), ', '.join(
self._format_message_link(None, info.commit.hash)
for info in sorted(commit_infos, key=lambda item: item.commit.hash or '')),
[], Commit(None, '', commit_infos[0].commit.authors), []))
def _format_cleanup_misc_items(self, group):
for authors, infos in group.items():
message = ', '.join(
self._format_message_link(None, info.commit.hash)
for info in sorted(infos, key=lambda item: item.commit.hash or ''))
yield f'{message} by {self._format_authors(authors)}'
return sorted_items
def format_single_change(self, info):
message = self._format_message_link(info.message, info.commit.hash)
......@@ -236,12 +244,8 @@ class CommitRange:
AUTHOR_INDICATOR_RE = re.compile(r'Authored by:? ', re.IGNORECASE)
MESSAGE_RE = re.compile(r'''
(?:\[
(?P<prefix>[^\]\/:,]+)
(?:/(?P<details>[^\]:,]+))?
(?:[:,](?P<sub_details>[^\]]+))?
\]\ )?
(?:(?P<sub_details_alt>`?[^:`]+`?): )?
(?:\[(?P<prefix>[^\]]+)\]\ )?
(?:(?P<sub_details>`?[^:`]+`?): )?
(?P<message>.+?)
(?:\ \((?P<issues>\#\d+(?:,\ \#\d+)*)\))?
''', re.VERBOSE | re.DOTALL)
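As a standalone illustration of the simplified pattern (reproducing the new regex above; the sample message is made up):

```python
import re

MESSAGE_RE = re.compile(r'''
    (?:\[(?P<prefix>[^\]]+)\]\ )?
    (?:(?P<sub_details>`?[^:`]+`?): )?
    (?P<message>.+?)
    (?:\ \((?P<issues>\#\d+(?:,\ \#\d+)*)\))?
''', re.VERBOSE | re.DOTALL)

m = MESSAGE_RE.fullmatch('[extractor/youtube] music: Extract title (#7102)')
assert m.groups() == ('extractor/youtube', 'music', 'Extract title', '#7102')
```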
......@@ -340,60 +344,76 @@ def apply_overrides(self, overrides):
self._commits = {key: value for key, value in reversed(self._commits.items())}
def groups(self):
groups = defaultdict(list)
group_dict = defaultdict(list)
for commit in self:
upstream_re = self.UPSTREAM_MERGE_RE.match(commit.short)
upstream_re = self.UPSTREAM_MERGE_RE.search(commit.short)
if upstream_re:
commit.short = f'[upstream] Merge up to youtube-dl {upstream_re.group(1)}'
commit.short = f'[upstream] Merged with youtube-dl {upstream_re.group(1)}'
match = self.MESSAGE_RE.fullmatch(commit.short)
if not match:
logger.error(f'Error parsing short commit message: {commit.short!r}')
continue
prefix, details, sub_details, sub_details_alt, message, issues = match.groups()
group = None
if prefix:
if prefix == 'priority':
prefix, _, details = (details or '').partition('/')
logger.debug(f'Priority: {message!r}')
group = CommitGroup.PRIORITY
if not details and prefix:
if prefix not in ('core', 'downloader', 'extractor', 'misc', 'postprocessor', 'upstream'):
logger.debug(f'Replaced details with {prefix!r}')
details = prefix or None
if details == 'common':
details = None
if details:
details = details.strip()
prefix, sub_details_alt, message, issues = match.groups()
issues = [issue.strip()[1:] for issue in issues.split(',')] if issues else []
if prefix:
groups, details, sub_details = zip(*map(self.details_from_prefix, prefix.split(',')))
group = next(iter(filter(None, groups)), None)
details = ', '.join(unique(details))
sub_details = list(itertools.chain.from_iterable(sub_details))
else:
group = CommitGroup.CORE
details = None
sub_details = []
sub_details = f'{sub_details or ""},{sub_details_alt or ""}'.replace(':', ',')
sub_details = tuple(filter(None, map(str.strip, sub_details.split(','))))
issues = [issue.strip()[1:] for issue in issues.split(',')] if issues else []
if sub_details_alt:
sub_details.append(sub_details_alt)
sub_details = tuple(unique(sub_details))
if not group:
group = CommitGroup.get(prefix.lower())
if not group:
if self.EXTRACTOR_INDICATOR_RE.search(commit.short):
group = CommitGroup.EXTRACTOR
else:
group = CommitGroup.POSTPROCESSOR
logger.warning(f'Failed to map {commit.short!r}, selected {group.name}')
if self.EXTRACTOR_INDICATOR_RE.search(commit.short):
group = CommitGroup.EXTRACTOR
else:
group = CommitGroup.POSTPROCESSOR
logger.warning(f'Failed to map {commit.short!r}, selected {group.name.lower()}')
commit_info = CommitInfo(
details, sub_details, message.strip(),
issues, commit, self._fixes[commit.hash])
logger.debug(f'Resolved {commit.short!r} to {commit_info!r}')
groups[group].append(commit_info)
group_dict[group].append(commit_info)
return group_dict
@staticmethod
def details_from_prefix(prefix):
if not prefix:
return CommitGroup.CORE, None, ()
return groups
prefix, _, details = prefix.partition('/')
prefix = prefix.strip().lower()
details = details.strip()
group = CommitGroup.get(prefix)
if group is CommitGroup.PRIORITY:
prefix, _, details = details.partition('/')
if not details and prefix and prefix not in CommitGroup.ignorable_prefixes:
logger.debug(f'Replaced details with {prefix!r}')
details = prefix or None
if details == 'common':
details = None
if details:
details, *sub_details = details.split(':')
else:
sub_details = []
return group, details, sub_details
def get_new_contributors(contributors_path, commits):
......@@ -444,6 +464,9 @@ def get_new_contributors(contributors_path, commits):
parser.add_argument(
'--repo', default='yt-dlp/yt-dlp',
help='the github repository to use for the operations (default: %(default)s)')
parser.add_argument(
'--collapsible', action='store_true',
help='make changelog collapsible (default: %(default)s)')
args = parser.parse_args()
logging.basicConfig(
......@@ -467,4 +490,4 @@ def get_new_contributors(contributors_path, commits):
write_file(args.contributors_path, '\n'.join(new_contributors) + '\n', mode='a')
logger.info(f'New contributors: {", ".join(new_contributors)}')
print(Changelog(commits.groups(), args.repo))
print(Changelog(commits.groups(), args.repo, args.collapsible))
......@@ -51,7 +51,7 @@ def get_git_head():
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Update the version.py file')
parser.add_argument(
'-c', '--channel', choices=['stable', 'nightly'], default='stable',
'-c', '--channel', default='stable',
help='Select update channel (default: %(default)s)')
parser.add_argument(
'-o', '--output', default='yt_dlp/version.py',
......
......@@ -8,6 +8,7 @@ ignore = E402,E501,E731,E741,W503
max_line_length = 120
per_file_ignores =
devscripts/lazy_load_template.py: F401
yt_dlp/utils/__init__.py: F401, F403
[autoflake]
......
......@@ -194,8 +194,8 @@ def sanitize_got_info_dict(got_dict):
'formats', 'thumbnails', 'subtitles', 'automatic_captions', 'comments', 'entries',
# Auto-generated
'autonumber', 'playlist', 'format_index', 'video_ext', 'audio_ext', 'duration_string', 'epoch',
'fulltitle', 'extractor', 'extractor_key', 'filepath', 'infojson_filename', 'original_url', 'n_entries',
'autonumber', 'playlist', 'format_index', 'video_ext', 'audio_ext', 'duration_string', 'epoch', 'n_entries',
'fulltitle', 'extractor', 'extractor_key', 'filename', 'filepath', 'infojson_filename', 'original_url',
# Only live_status needs to be checked
'is_live', 'was_live',
......
......@@ -757,7 +757,7 @@ def expect_same_infodict(out):
test('%(id)r %(height)r', "'1234' 1080")
test('%(ext)s-%(ext|def)d', 'mp4-def')
test('%(width|0)04d', '0000')
test('a%(width|)d', 'a', outtmpl_na_placeholder='none')
test('a%(width|b)d', 'ab', outtmpl_na_placeholder='none')
FORMATS = self.outtmpl_info['formats']
sanitize = lambda x: x.replace(':', '：').replace('"', '＂').replace('\n', ' ')
......@@ -871,12 +871,12 @@ def test_postprocessors(self):
class SimplePP(PostProcessor):
def run(self, info):
with open(audiofile, 'wt') as f:
with open(audiofile, 'w') as f:
f.write('EXAMPLE')
return [info['filepath']], info
def run_pp(params, PP):
with open(filename, 'wt') as f:
with open(filename, 'w') as f:
f.write('EXAMPLE')
ydl = YoutubeDL(params)
ydl.add_post_processor(PP())
......@@ -895,7 +895,7 @@ def run_pp(params, PP):
class ModifierPP(PostProcessor):
def run(self, info):
with open(info['filepath'], 'wt') as f:
with open(info['filepath'], 'w') as f:
f.write('MODIFIED')
return [], info
......
......@@ -146,6 +146,10 @@
'https://www.youtube.com/s/player/6f20102c/player_ias.vflset/en_US/base.js',
'lE8DhoDmKqnmJJ', 'pJTTX6XyJP2BYw',
),
(
'https://www.youtube.com/s/player/cfa9e7cb/player_ias.vflset/en_US/base.js',
'aCi3iElgd2kq0bxVbQ', 'QX1y8jGb2IbZ0w',
),
]
......
......@@ -13,6 +13,7 @@
import random
import re
import shutil
import string
import subprocess
import sys
import tempfile
......@@ -21,7 +22,6 @@
import traceback
import unicodedata
import urllib.request
from string import Formatter, ascii_letters
from .cache import Cache
from .compat import compat_os_name, compat_shlex_quote
......@@ -124,7 +124,6 @@
parse_filesize,
preferredencoding,
prepend_extension,
register_socks_protocols,
remove_terminal_sequences,
render_table,
replace_extension,
......@@ -190,6 +189,7 @@ class YoutubeDL:
ap_username: Multiple-system operator account username.
ap_password: Multiple-system operator account password.
usenetrc: Use netrc for authentication instead.
netrc_location: Location of the netrc file. Defaults to ~/.netrc.
verbose: Print additional info to stdout.
quiet: Do not print messages to stdout.
no_warnings: Do not print out anything for warnings.
......@@ -738,7 +738,6 @@ def check_deprecated(param, option, suggestion):
when=when)
self._setup_opener()
register_socks_protocols()
def preload_download_archive(fn):
"""Preload the archive, if any is specified"""
......@@ -1078,7 +1077,7 @@ def _outtmpl_expandpath(outtmpl):
# correspondingly that is not what we want since we need to keep
# '%%' intact for template dict substitution step. Working around
# with boundary-alike separator hack.
sep = ''.join(random.choices(ascii_letters, k=32))
sep = ''.join(random.choices(string.ascii_letters, k=32))
outtmpl = outtmpl.replace('%%', f'%{sep}%').replace('$$', f'${sep}$')
# outtmpl should be expand_path'ed before template dict substitution
......@@ -1237,7 +1236,7 @@ def _dumpjson_default(obj):
return list(obj)
return repr(obj)
class _ReplacementFormatter(Formatter):
class _ReplacementFormatter(string.Formatter):
def get_field(self, field_name, args, kwargs):
if field_name.isdigit():
return args[0], -1
......@@ -2067,86 +2066,86 @@ def syntax_error(note, start):
def _parse_filter(tokens):
filter_parts = []
for type, string, start, _, _ in tokens:
if type == tokenize.OP and string == ']':
for type, string_, start, _, _ in tokens:
if type == tokenize.OP and string_ == ']':
return ''.join(filter_parts)
else:
filter_parts.append(string)
filter_parts.append(string_)
def _remove_unused_ops(tokens):
# Remove operators that we don't use and join them with the surrounding strings.
# E.g. 'mp4' '-' 'baseline' '-' '16x9' is converted to 'mp4-baseline-16x9'
ALLOWED_OPS = ('/', '+', ',', '(', ')')
last_string, last_start, last_end, last_line = None, None, None, None
for type, string, start, end, line in tokens:
if type == tokenize.OP and string == '[':
for type, string_, start, end, line in tokens:
if type == tokenize.OP and string_ == '[':
if last_string:
yield tokenize.NAME, last_string, last_start, last_end, last_line
last_string = None
yield type, string, start, end, line
yield type, string_, start, end, line
# everything inside brackets will be handled by _parse_filter
for type, string, start, end, line in tokens:
yield type, string, start, end, line
if type == tokenize.OP and string == ']':
for type, string_, start, end, line in tokens:
yield type, string_, start, end, line
if type == tokenize.OP and string_ == ']':
break
elif type == tokenize.OP and string in ALLOWED_OPS:
elif type == tokenize.OP and string_ in ALLOWED_OPS:
if last_string:
yield tokenize.NAME, last_string, last_start, last_end, last_line
last_string = None
yield type, string, start, end, line
yield type, string_, start, end, line
elif type in [tokenize.NAME, tokenize.NUMBER, tokenize.OP]:
if not last_string:
last_string = string
last_string = string_
last_start = start
last_end = end
else:
last_string += string
last_string += string_
if last_string:
yield tokenize.NAME, last_string, last_start, last_end, last_line
def _parse_format_selection(tokens, inside_merge=False, inside_choice=False, inside_group=False):
selectors = []
current_selector = None
for type, string, start, _, _ in tokens:
for type, string_, start, _, _ in tokens:
# ENCODING is only defined in python 3.x
if type == getattr(tokenize, 'ENCODING', None):
continue
elif type in [tokenize.NAME, tokenize.NUMBER]:
current_selector = FormatSelector(SINGLE, string, [])
current_selector = FormatSelector(SINGLE, string_, [])
elif type == tokenize.OP:
if string == ')':
if string_ == ')':
if not inside_group:
# ')' will be handled by the parentheses group
tokens.restore_last_token()
break
elif inside_merge and string in ['/', ',']:
elif inside_merge and string_ in ['/', ',']:
tokens.restore_last_token()
break
elif inside_choice and string == ',':
elif inside_choice and string_ == ',':
tokens.restore_last_token()
break
elif string == ',':
elif string_ == ',':
if not current_selector:
raise syntax_error('"," must follow a format selector', start)
selectors.append(current_selector)
current_selector = None
elif string == '/':
elif string_ == '/':
if not current_selector:
raise syntax_error('"/" must follow a format selector', start)
first_choice = current_selector
second_choice = _parse_format_selection(tokens, inside_choice=True)
current_selector = FormatSelector(PICKFIRST, (first_choice, second_choice), [])
elif string == '[':
elif string_ == '[':
if not current_selector:
current_selector = FormatSelector(SINGLE, 'best', [])
format_filter = _parse_filter(tokens)
current_selector.filters.append(format_filter)
elif string == '(':
elif string_ == '(':
if current_selector:
raise syntax_error('Unexpected "("', start)
group = _parse_format_selection(tokens, inside_group=True)
current_selector = FormatSelector(GROUP, group, [])
elif string == '+':
elif string_ == '+':
if not current_selector:
raise syntax_error('Unexpected "+"', start)
selector_1 = current_selector
......@@ -2155,7 +2154,7 @@ def _parse_format_selection(tokens, inside_merge=False, inside_choice=False, ins
raise syntax_error('Expected a selector', start)
current_selector = FormatSelector(MERGE, (selector_1, selector_2), [])
else:
raise syntax_error(f'Operator not recognized: "{string}"', start)
raise syntax_error(f'Operator not recognized: "{string_}"', start)
elif type == tokenize.ENDMARKER:
break
if current_selector:
......@@ -2381,7 +2380,9 @@ def restore_last_token(self):
def _calc_headers(self, info_dict):
res = merge_headers(self.params['http_headers'], info_dict.get('http_headers') or {})
if 'Youtubedl-No-Compression' in res: # deprecated
res.pop('Youtubedl-No-Compression', None)
res['Accept-Encoding'] = 'identity'
cookies = self._calc_cookies(info_dict['url'])
if cookies:
res['Cookie'] = cookies
......@@ -2897,7 +2898,7 @@ def format_tmpl(tmpl):
fmt = '%({})s'
if tmpl.startswith('{'):
tmpl = f'.{tmpl}'
tmpl, fmt = f'.{tmpl}', '%({})j'
if tmpl.endswith('='):
tmpl, fmt = tmpl[:-1], '{0} = %({0})#j'
return '\n'.join(map(fmt.format, [tmpl] if mobj.group('dict') else tmpl.split(',')))
......@@ -2936,7 +2937,8 @@ def print_field(field, actual_field=None, optional=False):
print_field('url', 'urls')
print_field('thumbnail', optional=True)
print_field('description', optional=True)
print_field('filename', optional=True)
if filename:
print_field('filename')
if self.params.get('forceduration') and info_copy.get('duration') is not None:
self.to_stdout(formatSeconds(info_copy['duration']))
print_field('format')
......@@ -3418,8 +3420,8 @@ def sanitize_info(info_dict, remove_private_keys=False):
if remove_private_keys:
reject = lambda k, v: v is None or k.startswith('__') or k in {
'requested_downloads', 'requested_formats', 'requested_subtitles', 'requested_entries',
'entries', 'filepath', '_filename', 'infojson_filename', 'original_url', 'playlist_autonumber',
'_format_sort_fields',
'entries', 'filepath', '_filename', 'filename', 'infojson_filename', 'original_url',
'playlist_autonumber', '_format_sort_fields',
}
else:
reject = lambda k, v: False
......@@ -3488,7 +3490,7 @@ def run_pp(self, pp, infodict):
*files_to_delete, info=infodict, msg='Deleting original file %s (pass -k to keep)')
return infodict
def run_all_pps(self, key, info, *, additional_pps=None, fatal=True):
def run_all_pps(self, key, info, *, additional_pps=None):
if key != 'video':
self._forceprint(key, info)
for pp in (additional_pps or []) + self._pps[key]:
......@@ -3994,7 +3996,7 @@ def _write_subtitles(self, info_dict, filename):
# that way it will silently go on when used with unsupporting IE
return ret
elif not subtitles:
self.to_screen('[info] There\'s no subtitles for the requested languages')
self.to_screen('[info] There are no subtitles for the requested languages')
return ret
sub_filename_base = self.prepare_filename(info_dict, 'subtitle')
if not sub_filename_base:
......@@ -4048,7 +4050,7 @@ def _write_thumbnails(self, label, info_dict, filename, thumb_filename_base=None
if write_all or self.params.get('writethumbnail', False):
thumbnails = info_dict.get('thumbnails') or []
if not thumbnails:
self.to_screen(f'[info] There\'s no {label} thumbnails to download')
self.to_screen(f'[info] There are no {label} thumbnails to download')
return ret
multiple = write_all and len(thumbnails) > 1
......
......@@ -13,6 +13,7 @@
import os
import re
import sys
import traceback
from .compat import compat_shlex_quote
from .cookies import SUPPORTED_BROWSERS, SUPPORTED_KEYRINGS
......@@ -937,14 +938,18 @@ def _real_main(argv=None):
if opts.rm_cachedir:
ydl.cache.remove()
updater = Updater(ydl, opts.update_self if isinstance(opts.update_self, str) else None)
if opts.update_self and updater.update() and actual_use:
if updater.cmd:
return updater.restart()
# This code is reachable only for zip variant in py < 3.10
# It makes sense to exit here, but the old behavior is to continue
ydl.report_warning('Restart yt-dlp to use the updated version')
# return 100, 'ERROR: The program must exit for the update to complete'
try:
updater = Updater(ydl, opts.update_self)
if opts.update_self and updater.update() and actual_use:
if updater.cmd:
return updater.restart()
# This code is reachable only for zip variant in py < 3.10
# It makes sense to exit here, but the old behavior is to continue
ydl.report_warning('Restart yt-dlp to use the updated version')
# return 100, 'ERROR: The program must exit for the update to complete'
except Exception:
traceback.print_exc()
ydl._download_retcode = 100
if not actual_use:
if pre_process:
......
......@@ -23,7 +23,6 @@
encodeArgument,
encodeFilename,
find_available_port,
handle_youtubedl_headers,
remove_end,
sanitized_Request,
traverse_obj,
......@@ -529,10 +528,9 @@ def _call_downloader(self, tmpfilename, info_dict):
selected_formats = info_dict.get('requested_formats') or [info_dict]
for i, fmt in enumerate(selected_formats):
if fmt.get('http_headers') and re.match(r'^https?://', fmt['url']):
headers_dict = handle_youtubedl_headers(fmt['http_headers'])
# Trailing \r\n after each HTTP header is important to prevent warning from ffmpeg/avconv:
# [http @ 00000000003d2fa0] No trailing CRLF found in HTTP header.
args.extend(['-headers', ''.join(f'{key}: {val}\r\n' for key, val in headers_dict.items())])
args.extend(['-headers', ''.join(f'{key}: {val}\r\n' for key, val in fmt['http_headers'].items())])
if start_time:
args += ['-ss', str(start_time)]
......
......@@ -45,8 +45,8 @@ class DownloadContext(dict):
ctx.tmpfilename = self.temp_name(filename)
ctx.stream = None
# Do not include the Accept-Encoding header
headers = {'Youtubedl-no-compression': 'True'}
# Disable compression
headers = {'Accept-Encoding': 'identity'}
add_headers = info_dict.get('http_headers')
if add_headers:
headers.update(add_headers)
......
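`Accept-Encoding: identity` is the standard HTTP mechanism for requesting an uncompressed response, so the downloader no longer needs the internal `Youtubedl-no-compression` pseudo-header (whose old semantics survive only in the `_legacy` shim shown further below). For instance, with plain urllib (URL is illustrative):

import urllib.request

req = urllib.request.Request(
    'https://example.com/segment.ts',
    headers={'Accept-Encoding': 'identity'},  # ask the server not to compress
)
# any info_dict-supplied http_headers are then merged on top of this default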
@@ -247,7 +247,6 @@
from .bostonglobe import BostonGlobeIE
from .box import BoxIE
from .boxcast import BoxCastVideoIE
from .booyah import BooyahClipsIE
from .bpb import BpbIE
from .br import (
BRIE,
......
from .common import InfoExtractor
from ..utils import int_or_none, str_or_none, traverse_obj
class BooyahBaseIE(InfoExtractor):
_BOOYAH_SESSION_KEY = None
def _real_initialize(self):
BooyahBaseIE._BOOYAH_SESSION_KEY = self._request_webpage(
'https://booyah.live/api/v3/auths/sessions', None, data=b'').getheader('booyah-session-key')
def _get_comments(self, video_id):
comment_json = self._download_json(
f'https://booyah.live/api/v3/playbacks/{video_id}/comments/tops', video_id,
headers={'Booyah-Session-Key': self._BOOYAH_SESSION_KEY}, fatal=False) or {}
return [{
'id': comment.get('comment_id'),
'author': comment.get('from_nickname'),
'author_id': comment.get('from_uid'),
'author_thumbnail': comment.get('from_thumbnail'),
'text': comment.get('content'),
'timestamp': comment.get('create_time'),
'like_count': comment.get('like_cnt'),
} for comment in comment_json.get('comment_list') or ()]
class BooyahClipsIE(BooyahBaseIE):
_VALID_URL = r'https?://booyah.live/clips/(?P<id>\d+)'
_TESTS = [{
'url': 'https://booyah.live/clips/13887261322952306617',
'info_dict': {
'id': '13887261322952306617',
'ext': 'mp4',
'view_count': int,
'duration': 30,
'channel_id': 90565760,
'like_count': int,
'title': 'Cayendo con estilo 😎',
'uploader': '♡LɪꜱGΛ​MER​',
'comment_count': int,
'uploader_id': '90565760',
'thumbnail': 'https://resmambet-a.akamaihd.net/mambet-storage/Clip/90565760/90565760-27204374-fba0-409d-9d7b-63a48b5c0e75.jpg',
'upload_date': '20220617',
'timestamp': 1655490556,
'modified_timestamp': 1655490556,
'modified_date': '20220617',
}
}]
def _real_extract(self, url):
video_id = self._match_id(url)
json_data = self._download_json(
f'https://booyah.live/api/v3/playbacks/{video_id}', video_id,
headers={'Booyah-Session-key': self._BOOYAH_SESSION_KEY})
formats = []
for video_data in json_data['playback']['endpoint_list']:
formats.extend(({
'url': video_data.get('stream_url'),
'ext': 'mp4',
'height': video_data.get('resolution'),
}, {
'url': video_data.get('download_url'),
'ext': 'mp4',
'format_note': 'Watermarked',
'height': video_data.get('resolution'),
'preference': -10,
}))
return {
'id': video_id,
'title': traverse_obj(json_data, ('playback', 'name')),
'thumbnail': traverse_obj(json_data, ('playback', 'thumbnail_url')),
'formats': formats,
'view_count': traverse_obj(json_data, ('playback', 'views')),
'like_count': traverse_obj(json_data, ('playback', 'likes')),
'duration': traverse_obj(json_data, ('playback', 'duration')),
'comment_count': traverse_obj(json_data, ('playback', 'comment_cnt')),
'channel_id': traverse_obj(json_data, ('playback', 'channel_id')),
'uploader': traverse_obj(json_data, ('user', 'nickname')),
'uploader_id': str_or_none(traverse_obj(json_data, ('user', 'uid'))),
'modified_timestamp': int_or_none(traverse_obj(json_data, ('playback', 'update_time_ms')), 1000),
'timestamp': int_or_none(traverse_obj(json_data, ('playback', 'create_time_ms')), 1000),
'__post_extractor': self.extract_comments(video_id, self._get_comments(video_id)),
}
@@ -113,7 +113,7 @@ def _real_extract(self, url):
entry_protocol='m3u8_native', m3u8_id='hls')
for a_format in formats:
# LiTV HLS segments doesn't like compressions
a_format.setdefault('http_headers', {})['Youtubedl-no-compression'] = True
a_format.setdefault('http_headers', {})['Accept-Encoding'] = 'identity'
title = program_info['title'] + program_info.get('secondaryMark', '')
description = program_info.get('description')
......
@@ -131,8 +131,9 @@ class KnownPiracyIE(UnsupportedInfoExtractor):
URLS = (
r'dood\.(?:to|watch|so|pm|wf|re)',
# Sites youtube-dl supports, but we won't
r'https://viewsb\.com',
r'https://filemoon\.sx',
r'viewsb\.com',
r'filemoon\.sx',
r'hentai\.animestigma\.com',
)
_TESTS = [{
......
@@ -4579,8 +4579,11 @@ def _grid_entries(self, grid_renderer):
def _music_reponsive_list_entry(self, renderer):
video_id = traverse_obj(renderer, ('playlistItemData', 'videoId'))
if video_id:
title = traverse_obj(renderer, (
'flexColumns', 0, 'musicResponsiveListItemFlexColumnRenderer',
'text', 'runs', 0, 'text'))
return self.url_result(f'https://music.youtube.com/watch?v={video_id}',
ie=YoutubeIE.ie_key(), video_id=video_id)
ie=YoutubeIE.ie_key(), video_id=video_id, title=title)
playlist_id = traverse_obj(renderer, ('navigationEndpoint', 'watchEndpoint', 'playlistId'))
if playlist_id:
video_id = traverse_obj(renderer, ('navigationEndpoint', 'watchEndpoint', 'videoId'))
@@ -4639,11 +4642,19 @@ def _playlist_entries(self, video_list_renderer):
def _rich_entries(self, rich_grid_renderer):
renderer = traverse_obj(
rich_grid_renderer, ('content', ('videoRenderer', 'reelItemRenderer')), get_all=False) or {}
rich_grid_renderer,
('content', ('videoRenderer', 'reelItemRenderer', 'playlistRenderer')), get_all=False) or {}
video_id = renderer.get('videoId')
if not video_id:
if video_id:
yield self._extract_video(renderer)
return
playlist_id = renderer.get('playlistId')
if playlist_id:
yield self.url_result(
f'https://www.youtube.com/playlist?list={playlist_id}',
ie=YoutubeTabIE.ie_key(), video_id=playlist_id,
video_title=self._get_text(renderer, 'title'))
return
yield self._extract_video(renderer)
def _video_entry(self, video_renderer):
video_id = video_renderer.get('videoId')
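`_rich_entries` now recognizes three renderer shapes: a `videoId` yields the extracted video, a `playlistId` (the new Podcasts/Releases cards) yields a URL result pointing at the canonical playlist, and anything else falls through to generic video extraction. A hypothetical standalone mirror of that dispatch:

def classify_rich_entry(renderer):
    # renderer is the inner videoRenderer/reelItemRenderer/playlistRenderer dict
    if renderer.get('videoId'):
        return 'video', renderer['videoId']
    if renderer.get('playlistId'):
        # re-extracted by YoutubeTabIE via the canonical playlist URL
        return 'playlist', f"https://www.youtube.com/playlist?list={renderer['playlistId']}"
    return 'video', None  # fall back to generic extraction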
@@ -6185,6 +6196,40 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'uploader': '3Blue1Brown',
},
'playlist_count': 0,
}, {
# Podcasts tab, with rich entry playlistRenderers
'url': 'https://www.youtube.com/@99percentinvisiblepodcast/podcasts',
'info_dict': {
'id': 'UCVMF2HD4ZgC0QHpU9Yq5Xrw',
'channel_id': 'UCVMF2HD4ZgC0QHpU9Yq5Xrw',
'uploader_url': 'https://www.youtube.com/@99percentinvisiblepodcast',
'description': 'md5:3a0ed38f1ad42a68ef0428c04a15695c',
'title': '99 Percent Invisible - Podcasts',
'uploader': '99 Percent Invisible',
'channel_follower_count': int,
'channel_url': 'https://www.youtube.com/channel/UCVMF2HD4ZgC0QHpU9Yq5Xrw',
'tags': [],
'channel': '99 Percent Invisible',
'uploader_id': '@99percentinvisiblepodcast',
},
'playlist_count': 1,
}, {
# Releases tab, with rich entry playlistRenderers (same as Podcasts tab)
'url': 'https://www.youtube.com/@AHimitsu/releases',
'info_dict': {
'id': 'UCgFwu-j5-xNJml2FtTrrB3A',
'channel': 'A Himitsu',
'uploader_url': 'https://www.youtube.com/@AHimitsu',
'title': 'A Himitsu - Releases',
'uploader_id': '@AHimitsu',
'uploader': 'A Himitsu',
'channel_id': 'UCgFwu-j5-xNJml2FtTrrB3A',
'tags': 'count:16',
'description': 'I make music',
'channel_url': 'https://www.youtube.com/channel/UCgFwu-j5-xNJml2FtTrrB3A',
'channel_follower_count': int,
},
'playlist_mincount': 10,
}]
@classmethod
......
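The new tabs can be exercised from the command line; with a flat extraction, the playlist entries produced by the `playlistRenderer` handling are listed without downloading anything (the output template is just an example):

yt-dlp --flat-playlist --print "%(title)s :: %(url)s" "https://www.youtube.com/@99percentinvisiblepodcast/podcasts"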
@@ -20,7 +20,12 @@
def _js_bit_op(op):
def zeroise(x):
return 0 if x in (None, JS_Undefined) else x
if x in (None, JS_Undefined):
return 0
with contextlib.suppress(TypeError):
if math.isnan(x): # NB: NaN cannot be checked by membership
return 0
return x
def wrapped(a, b):
return op(zeroise(a), zeroise(b)) & 0xffffffff
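`zeroise` needs the explicit `math.isnan` check because NaN slips through membership tests: NaN never compares equal to anything, including itself, and `math.isnan` raises `TypeError` for non-numbers, hence the `contextlib.suppress(TypeError)`. The `& 0xffffffff` mask then emulates JavaScript's 32-bit coercion of bitwise operands:

>>> import math
>>> float('nan') == float('nan')     # NaN never compares equal
False
>>> float('nan') in (float('nan'),)  # so a fresh NaN is never "in" a tuple
False
>>> math.isnan(float('nan'))
True
>>> math.isnan('abc')                # non-numbers raise, hence suppress(TypeError)
Traceback (most recent call last):
  ...
TypeError: must be real number, not str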
@@ -243,7 +248,7 @@ def _separate(expr, delim=',', max_split=None):
return
counters = {k: 0 for k in _MATCHING_PARENS.values()}
start, splits, pos, delim_len = 0, 0, 0, len(delim) - 1
in_quote, escaping, after_op, in_regex_char_group, in_unary_op = None, False, True, False, False
in_quote, escaping, after_op, in_regex_char_group = None, False, True, False
for idx, char in enumerate(expr):
if not in_quote and char in _MATCHING_PARENS:
counters[_MATCHING_PARENS[char]] += 1
......
@@ -323,7 +323,7 @@ def _alias_callback(option, opt_str, value, parser, opts, nargs):
help='Print program version and exit')
general.add_option(
'-U', '--update',
action='store_true', dest='update_self',
action='store_const', dest='update_self', const=CHANNEL,
help=format_field(
is_non_updateable(), None, 'Check if updates are available. %s',
default=f'Update this program to the latest {CHANNEL} version'))
@@ -335,9 +335,9 @@ def _alias_callback(option, opt_str, value, parser, opts, nargs):
'--update-to',
action='store', dest='update_self', metavar='[CHANNEL]@[TAG]',
help=(
'Upgrade/downgrade to a specific version. CHANNEL and TAG defaults to '
f'"{CHANNEL}" and "latest" respectively if omitted; See "UPDATE" for details. '
f'Supported channels: {", ".join(UPDATE_SOURCES)}'))
'Upgrade/downgrade to a specific version. CHANNEL can be a repository as well. '
f'CHANNEL and TAG default to "{CHANNEL.partition("@")[0]}" and "latest" respectively if omitted; '
f'See "UPDATE" for details. Supported channels: {", ".join(UPDATE_SOURCES)}'))
general.add_option(
'-i', '--ignore-errors',
action='store_true', dest='ignoreerrors',
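Taken together, `-U` is now sugar for updating to the latest release of the current channel, while `--update-to` also accepts an `owner/repo` string in the CHANNEL slot. Illustrative invocations (the fork name and the pinned tag are made up):

yt-dlp --update-to nightly                    # latest release on the nightly channel
yt-dlp --update-to stable@2023.03.04          # pin to a specific stable tag
yt-dlp --update-to example/yt-dlp-fork@latest # a hypothetical third-party repository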
@@ -411,7 +411,7 @@ def _alias_callback(option, opt_str, value, parser, opts, nargs):
general.add_option(
'--no-flat-playlist',
action='store_false', dest='extract_flat',
help='Extract the videos of a playlist')
help='Fully extract the videos of a playlist (default)')
general.add_option(
'--live-from-start',
action='store_true', dest='live_from_start',
@@ -521,11 +521,11 @@ def _alias_callback(option, opt_str, value, parser, opts, nargs):
help=optparse.SUPPRESS_HELP)
geo.add_option(
'--xff', metavar='VALUE',
dest='geo_bypass', default="default",
dest='geo_bypass', default='default',
help=(
'How to fake X-Forwarded-For HTTP header to try bypassing geographic restriction. '
'One of "default" (Only when known to be useful), "never", '
'a two-letter ISO 3166-2 country code, or an IP block in CIDR notation'))
'One of "default" (only when known to be useful), "never", '
'an IP block in CIDR notation, or a two-letter ISO 3166-2 country code'))
geo.add_option(
'--geo-bypass',
action='store_const', dest='geo_bypass', const='default',
@@ -617,7 +617,7 @@ def _alias_callback(option, opt_str, value, parser, opts, nargs):
'that contains the phrase "cats & dogs" (caseless). '
'Use "--match-filter -" to interactively ask whether to download each video'))
selection.add_option(
'--no-match-filter',
'--no-match-filters',
dest='match_filter', action='store_const', const=None,
help='Do not use any --match-filter (default)')
selection.add_option(
......
@@ -16,6 +16,7 @@
Popen,
cached_method,
deprecation_warning,
network_exceptions,
remove_end,
remove_start,
sanitized_Request,
@@ -128,27 +129,36 @@ def __init__(self, ydl, target=None):
self.ydl = ydl
self.target_channel, sep, self.target_tag = (target or CHANNEL).rpartition('@')
if not sep and self.target_tag in UPDATE_SOURCES: # stable => stable@latest
self.target_channel, self.target_tag = self.target_tag, None
# stable => stable@latest
if not sep and ('/' in self.target_tag or self.target_tag in UPDATE_SOURCES):
self.target_channel = self.target_tag
self.target_tag = None
elif not self.target_channel:
self.target_channel = CHANNEL
self.target_channel = CHANNEL.partition('@')[0]
if not self.target_tag:
self.target_tag, self._exact = 'latest', False
self.target_tag = 'latest'
self._exact = False
elif self.target_tag != 'latest':
self.target_tag = f'tags/{self.target_tag}'
@property
def _target_repo(self):
try:
return UPDATE_SOURCES[self.target_channel]
except KeyError:
return self._report_error(
f'Invalid update channel {self.target_channel!r} requested. '
f'Valid channels are {", ".join(UPDATE_SOURCES)}', True)
if '/' in self.target_channel:
self._target_repo = self.target_channel
if self.target_channel not in (CHANNEL, *UPDATE_SOURCES.values()):
self.ydl.report_warning(
f'You are switching to an {self.ydl._format_err("unofficial", "red")} executable '
f'from {self.ydl._format_err(self._target_repo, self.ydl.Styles.EMPHASIS)}. '
f'Run {self.ydl._format_err("at your own risk", "light red")}')
self.restart = self._blocked_restart
else:
self._target_repo = UPDATE_SOURCES.get(self.target_channel)
if not self._target_repo:
self._report_error(
f'Invalid update channel {self.target_channel!r} requested. '
f'Valid channels are {", ".join(UPDATE_SOURCES)}', True)
def _version_compare(self, a, b, channel=CHANNEL):
if channel != self.target_channel:
if self._exact and channel != self.target_channel:
return False
if _VERSION_RE.fullmatch(f'{a}.{b}'):
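All three target spellings funnel through the `rpartition('@')` in `__init__`: with no `@` present, the whole string lands in `target_tag`, which is why the `not sep` branch promotes it to a channel when it contains a `/` or names a known update source. A quick illustration (the repo target is hypothetical):

>>> 'stable@latest'.rpartition('@')
('stable', '@', 'latest')
>>> 'nightly'.rpartition('@')  # no separator: channel-only spelling
('', '', 'nightly')
>>> 'example/fork@2023.05.19'.rpartition('@')
('example/fork', '@', '2023.05.19')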
@@ -258,8 +268,8 @@ def check_update(self):
self.ydl.to_screen((
f'Available version: {self._label(self.target_channel, self.latest_version)}, ' if self.target_tag == 'latest' else ''
) + f'Current version: {self._label(CHANNEL, self.current_version)}')
except Exception:
return self._report_network_error('obtain version info', delim='; Please try again later or')
except network_exceptions as e:
return self._report_network_error(f'obtain version info ({e})', delim='; Please try again later or')
if not is_non_updateable():
self.ydl.to_screen(f'Current Build Hash: {_sha256_file(self.filename)}')
@@ -303,7 +313,7 @@ def update(self):
try:
newcontent = self._download(self.release_name, self._tag)
except Exception as e:
except network_exceptions as e:
if isinstance(e, urllib.error.HTTPError) and e.code == 404:
return self._report_error(
f'The requested tag {self._label(self.target_channel, self.target_tag)} does not exist', True)
@@ -371,6 +381,12 @@ def restart(self):
_, _, returncode = Popen.run(self.cmd)
return returncode
def _blocked_restart(self):
self._report_error(
'Automatically restarting into custom builds is disabled for security reasons. '
'Restart yt-dlp to use the updated version', expected=True)
return self.ydl._download_retcode
def run_update(ydl):
"""Update the program file with the latest version from the repository
......
import warnings
from ..compat.compat_utils import passthrough_module
# XXX: Implement this the same way as other DeprecationWarnings without circular import
passthrough_module(__name__, '._legacy', callback=lambda attr: warnings.warn(
DeprecationWarning(f'{__name__}.{attr} is deprecated'), stacklevel=5))
del passthrough_module
# isort: off
from .traversal import *
from ._utils import *
from ._utils import _configuration_args, _get_exe_version_output
from ._deprecated import *
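`passthrough_module` is yt-dlp's internal compat helper: it lazily forwards unknown attribute lookups to `._legacy` and fires the `DeprecationWarning` from the callback. The same effect can be approximated with a plain PEP 562 module-level `__getattr__`; a minimal sketch, not the actual implementation:

import importlib
import warnings

def __getattr__(attr):
    # Forward unknown attributes to the _legacy submodule, warning on each access
    legacy = importlib.import_module('._legacy', __package__)
    warnings.warn(DeprecationWarning(f'{__name__}.{attr} is deprecated'), stacklevel=2)
    return getattr(legacy, attr)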
"""Deprecated - New code should avoid these"""
from ._utils import preferredencoding
def encodeFilename(s, for_subprocess=False):
assert isinstance(s, str)
return s
def decodeFilename(b, for_subprocess=False):
return b
def decodeArgument(b):
return b
def decodeOption(optval):
if optval is None:
return optval
if isinstance(optval, bytes):
optval = optval.decode(preferredencoding())
assert isinstance(optval, str)
return optval
def error_to_compat_str(err):
return str(err)
"""No longer used and new code should not use. Exists only for API compat."""
import platform
import struct
import sys
import urllib.parse
import zlib
from ._utils import decode_base_n, preferredencoding
from .traversal import traverse_obj
from ..dependencies import certifi, websockets
has_certifi = bool(certifi)
has_websockets = bool(websockets)
def load_plugins(name, suffix, namespace):
from ..plugins import load_plugins
ret = load_plugins(name, suffix)
namespace.update(ret)
return ret
def traverse_dict(dictn, keys, casesense=True):
return traverse_obj(dictn, keys, casesense=casesense, is_user_input=True, traverse_string=True)
def decode_base(value, digits):
return decode_base_n(value, table=digits)
def platform_name():
""" Returns the platform name as a str """
return platform.platform()
def get_subprocess_encoding():
if sys.platform == 'win32' and sys.getwindowsversion()[0] >= 5:
# For subprocess calls, encode with locale encoding
# Refer to http://stackoverflow.com/a/9951851/35070
encoding = preferredencoding()
else:
encoding = sys.getfilesystemencoding()
if encoding is None:
encoding = 'utf-8'
return encoding
# UNUSED
# Based on png2str() written by @gdkchan and improved by @yokrysty
# Originally posted at https://github.com/ytdl-org/youtube-dl/issues/9706
def decode_png(png_data):
# Reference: https://www.w3.org/TR/PNG/
header = png_data[8:]
if png_data[:8] != b'\x89PNG\x0d\x0a\x1a\x0a' or header[4:8] != b'IHDR':
raise OSError('Not a valid PNG file.')
int_map = {1: '>B', 2: '>H', 4: '>I'}
unpack_integer = lambda x: struct.unpack(int_map[len(x)], x)[0]
chunks = []
while header:
length = unpack_integer(header[:4])
header = header[4:]
chunk_type = header[:4]
header = header[4:]
chunk_data = header[:length]
header = header[length:]
header = header[4:] # Skip CRC
chunks.append({
'type': chunk_type,
'length': length,
'data': chunk_data
})
ihdr = chunks[0]['data']
width = unpack_integer(ihdr[:4])
height = unpack_integer(ihdr[4:8])
idat = b''
for chunk in chunks:
if chunk['type'] == b'IDAT':
idat += chunk['data']
if not idat:
raise OSError('Unable to read PNG data.')
decompressed_data = bytearray(zlib.decompress(idat))
stride = width * 3
pixels = []
def _get_pixel(idx):
x = idx % stride
y = idx // stride
return pixels[y][x]
for y in range(height):
basePos = y * (1 + stride)
filter_type = decompressed_data[basePos]
current_row = []
pixels.append(current_row)
for x in range(stride):
color = decompressed_data[1 + basePos + x]
basex = y * stride + x
left = 0
up = 0
if x > 2:
left = _get_pixel(basex - 3)
if y > 0:
up = _get_pixel(basex - stride)
if filter_type == 1: # Sub
color = (color + left) & 0xff
elif filter_type == 2: # Up
color = (color + up) & 0xff
elif filter_type == 3: # Average
color = (color + ((left + up) >> 1)) & 0xff
elif filter_type == 4: # Paeth
a = left
b = up
c = 0
if x > 2 and y > 0:
c = _get_pixel(basex - stride - 3)
p = a + b - c
pa = abs(p - a)
pb = abs(p - b)
pc = abs(p - c)
if pa <= pb and pa <= pc:
color = (color + a) & 0xff
elif pb <= pc:
color = (color + b) & 0xff
else:
color = (color + c) & 0xff
current_row.append(color)
return width, height, pixels
def register_socks_protocols():
# "Register" SOCKS protocols
# In Python < 2.6.5, urlsplit() suffers from bug https://bugs.python.org/issue7904
# URLs with protocols not in urlparse.uses_netloc are not handled correctly
for scheme in ('socks', 'socks4', 'socks4a', 'socks5'):
if scheme not in urllib.parse.uses_netloc:
urllib.parse.uses_netloc.append(scheme)
def handle_youtubedl_headers(headers):
filtered_headers = headers
if 'Youtubedl-no-compression' in filtered_headers:
filtered_headers = {k: v for k, v in filtered_headers.items() if k.lower() != 'accept-encoding'}
del filtered_headers['Youtubedl-no-compression']
return filtered_headers
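This shim, kept only for API compatibility, documents exactly what the old pseudo-header did: its presence dropped any real `Accept-Encoding` header, and the marker itself was then removed. For example:

>>> handle_youtubedl_headers({'Youtubedl-no-compression': 'True',
...                           'Accept-Encoding': 'gzip', 'Referer': 'https://example.com'})
{'Referer': 'https://example.com'}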
import collections.abc
import contextlib
import inspect
import itertools
import re
from ._utils import (
IDENTITY,
NO_DEFAULT,
LazyList,
int_or_none,
is_iterable_like,
try_call,
variadic,
)
def traverse_obj(
obj, *paths, default=NO_DEFAULT, expected_type=None, get_all=True,
casesense=True, is_user_input=False, traverse_string=False):
"""
Safely traverse nested `dict`s and `Iterable`s
>>> obj = [{}, {"key": "value"}]
>>> traverse_obj(obj, (1, "key"))
"value"
Each of the provided `paths` is tested and the first producing a valid result will be returned.
The next path will also be tested if the path branched but no results could be found.
Supported values for traversal are `Mapping`, `Iterable` and `re.Match`.
Unhelpful values (`{}`, `None`) are treated as the absence of a value and discarded.
The paths will be wrapped in `variadic`, so that `'key'` is conveniently the same as `('key', )`.
The keys in the path can be one of:
- `None`: Return the current object.
- `set`: Requires the only item in the set to be a type or function,
like `{type}`/`{func}`. If a `type`, returns only values
of this type. If a function, returns `func(obj)`.
- `str`/`int`: Return `obj[key]`. For `re.Match`, return `obj.group(key)`.
- `slice`: Branch out and return all values in `obj[key]`.
- `Ellipsis`: Branch out and return a list of all values.
- `tuple`/`list`: Branch out and return a list of all matching values.
Read as: `[traverse_obj(obj, branch) for branch in branches]`.
- `function`: Branch out and return values filtered by the function.
Read as: `[value for key, value in obj if function(key, value)]`.
For `Iterable`s, `key` is the index of the value.
For `re.Match`es, `key` is the group number (0 = full match)
as well as additionally any group names, if given.
- `dict` Transform the current object and return a matching dict.
Read as: `{key: traverse_obj(obj, path) for key, path in dct.items()}`.
`tuple`, `list`, and `dict` all support nested paths and branches.
@params paths Paths which to traverse by.
@param default Value to return if the paths do not match.
If the last key in the path is a `dict`, it will apply to each value inside
the dict instead, depth first. Try to avoid if using nested `dict` keys.
@param expected_type If a `type`, only accept final values of this type.
If any other callable, try to call the function on each result.
If the last key in the path is a `dict`, it will apply to each value inside
the dict instead, recursively. This does respect branching paths.
@param get_all If `False`, return the first matching result, otherwise all matching ones.
@param casesense If `False`, consider string dictionary keys as case insensitive.
The following are only meant to be used by YoutubeDL.prepare_outtmpl and are not part of the API
@param is_user_input Whether the keys are generated from user input.
If `True` strings get converted to `int`/`slice` if needed.
@param traverse_string Whether to traverse into objects as strings.
If `True`, any non-compatible object will first be
converted into a string and then traversed into.
The return value of that path will be a string instead,
not respecting any further branching.
@returns The result of the object traversal.
If successful, `get_all=True`, and the path branches at least once,
then a list of results is returned instead.
If no `default` is given and the last path branches, a `list` of results
is always returned. If a path ends on a `dict` that result will always be a `dict`.
"""
casefold = lambda k: k.casefold() if isinstance(k, str) else k
if isinstance(expected_type, type):
type_test = lambda val: val if isinstance(val, expected_type) else None
else:
type_test = lambda val: try_call(expected_type or IDENTITY, args=(val,))
def apply_key(key, obj, is_last):
branching = False
result = None
if obj is None and traverse_string:
if key is ... or callable(key) or isinstance(key, slice):
branching = True
result = ()
elif key is None:
result = obj
elif isinstance(key, set):
assert len(key) == 1, 'Set should only be used to wrap a single item'
item = next(iter(key))
if isinstance(item, type):
if isinstance(obj, item):
result = obj
else:
result = try_call(item, args=(obj,))
elif isinstance(key, (list, tuple)):
branching = True
result = itertools.chain.from_iterable(
apply_path(obj, branch, is_last)[0] for branch in key)
elif key is ...:
branching = True
if isinstance(obj, collections.abc.Mapping):
result = obj.values()
elif is_iterable_like(obj):
result = obj
elif isinstance(obj, re.Match):
result = obj.groups()
elif traverse_string:
branching = False
result = str(obj)
else:
result = ()
elif callable(key):
branching = True
if isinstance(obj, collections.abc.Mapping):
iter_obj = obj.items()
elif is_iterable_like(obj):
iter_obj = enumerate(obj)
elif isinstance(obj, re.Match):
iter_obj = itertools.chain(
enumerate((obj.group(), *obj.groups())),
obj.groupdict().items())
elif traverse_string:
branching = False
iter_obj = enumerate(str(obj))
else:
iter_obj = ()
result = (v for k, v in iter_obj if try_call(key, args=(k, v)))
if not branching: # string traversal
result = ''.join(result)
elif isinstance(key, dict):
iter_obj = ((k, _traverse_obj(obj, v, False, is_last)) for k, v in key.items())
result = {
k: v if v is not None else default for k, v in iter_obj
if v is not None or default is not NO_DEFAULT
} or None
elif isinstance(obj, collections.abc.Mapping):
result = (try_call(obj.get, args=(key,)) if casesense or try_call(obj.__contains__, args=(key,)) else
next((v for k, v in obj.items() if casefold(k) == key), None))
elif isinstance(obj, re.Match):
if isinstance(key, int) or casesense:
with contextlib.suppress(IndexError):
result = obj.group(key)
elif isinstance(key, str):
result = next((v for k, v in obj.groupdict().items() if casefold(k) == key), None)
elif isinstance(key, (int, slice)):
if is_iterable_like(obj, collections.abc.Sequence):
branching = isinstance(key, slice)
with contextlib.suppress(IndexError):
result = obj[key]
elif traverse_string:
with contextlib.suppress(IndexError):
result = str(obj)[key]
return branching, result if branching else (result,)
def lazy_last(iterable):
iterator = iter(iterable)
prev = next(iterator, NO_DEFAULT)
if prev is NO_DEFAULT:
return
for item in iterator:
yield False, prev
prev = item
yield True, prev
def apply_path(start_obj, path, test_type):
objs = (start_obj,)
has_branched = False
key = None
for last, key in lazy_last(variadic(path, (str, bytes, dict, set))):
if is_user_input and isinstance(key, str):
if key == ':':
key = ...
elif ':' in key:
key = slice(*map(int_or_none, key.split(':')))
elif int_or_none(key) is not None:
key = int(key)
if not casesense and isinstance(key, str):
key = key.casefold()
if __debug__ and callable(key):
# Verify function signature
inspect.signature(key).bind(None, None)
new_objs = []
for obj in objs:
branching, results = apply_key(key, obj, last)
has_branched |= branching
new_objs.append(results)
objs = itertools.chain.from_iterable(new_objs)
if test_type and not isinstance(key, (dict, list, tuple)):
objs = map(type_test, objs)
return objs, has_branched, isinstance(key, dict)
def _traverse_obj(obj, path, allow_empty, test_type):
results, has_branched, is_dict = apply_path(obj, path, test_type)
results = LazyList(item for item in results if item not in (None, {}))
if get_all and has_branched:
if results:
return results.exhaust()
if allow_empty:
return [] if default is NO_DEFAULT else default
return None
return results[0] if results else {} if allow_empty and is_dict else None
for index, path in enumerate(paths, 1):
result = _traverse_obj(obj, path, index == len(paths), True)
if result is not None:
return result
return None if default is NO_DEFAULT else default
def get_first(obj, *paths, **kwargs):
return traverse_obj(obj, *((..., *variadic(keys)) for keys in paths), **kwargs, get_all=False)
def dict_get(d, key_or_keys, default=None, skip_false_values=True):
for val in map(d.get, variadic(key_or_keys)):
if val is not None and (val or not skip_false_values):
return val
return default
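A few doctest-style calls exercising the split-out traversal helpers, covering branching, `expected_type`, and `get_first` (the data is made up for illustration):

>>> data = {'formats': [{'url': 'https://a.example/v.mp4', 'height': 720},
...                     {'url': None, 'height': '1080'}]}
>>> traverse_obj(data, ('formats', ..., 'url'))  # branch over all formats; None is discarded
['https://a.example/v.mp4']
>>> traverse_obj(data, ('formats', 0, 'height'), expected_type=int)
720
>>> traverse_obj(data, ('formats', ..., 'height'), expected_type=int_or_none)
[720, 1080]
>>> responses = [{}, {'videoDetails': {'videoId': 'abc123'}}]
>>> get_first(responses, ('videoDetails', 'videoId'))  # first non-empty match across items
'abc123'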