Audio processing with Transformers#
In this notebook, we use the Hugging Face Transformers library to process audio data.
Zero-shot classification#
Let's start with a problem we have already seen in computer vision: zero-shot classification. It consists of assigning an audio clip to one of a set of candidate categories without the model having been trained on those specific categories.
Implementation#
For this, we use the ESC-50 dataset (ashraq/esc50), which contains 5-second environmental recordings spread across 50 different categories.
To download it, we use the Hugging Face datasets library:
from datasets import load_dataset
from transformers import pipeline
from IPython.display import Audio as IPythonAudio
dataset = load_dataset("ashraq/esc50",split="train[0:10]")
We can inspect the clip's metadata. The sampling rate (sampling_rate) is particularly important: we must make sure it matches the sampling rate of the data the model was trained on.
audio_sample = dataset[0]
audio_sample
{'filename': '1-100038-A-14.wav',
'fold': 1,
'target': 14,
'category': 'chirping_birds',
'esc10': False,
'src_file': 100038,
'take': 'A',
'audio': {'path': None,
'array': array([-0.01184082, -0.10336304, -0.14141846, ..., 0.06985474,
0.04049683, 0.00274658]),
'sampling_rate': 44100}}
We can listen to the audio clip (be careful not to turn your computer's volume up too much) using IPython.
IPythonAudio(audio_sample["audio"]["array"],rate=audio_sample["audio"]["sampling_rate"])
It is time to use the Hugging Face pipeline to load our model.
We use the CLAP model from LAION-AI (laion/clap-htsat-unfused).
audio_zero_shot = pipeline(task="zero-shot-audio-classification",model="laion/clap-htsat-unfused")
Let's check the model's sampling rate to see whether it matches that of our data.
print("Sampling rate du modèle : ",audio_zero_shot.feature_extractor.sampling_rate)
print("Sampling rate de notre extrait : ",audio_sample["audio"]["sampling_rate"])
Sampling rate du modèle : 48000
Sampling rate de notre extrait : 44100
We need to adjust the sampling rate of our dataset so that it is compatible with the model.
from datasets import Audio
dataset = dataset.cast_column("audio",Audio(sampling_rate=48_000))
audio_sample = dataset[0]
print("Sampling rate de notre extrait : ",audio_sample["audio"]["sampling_rate"])
Sampling rate de notre extrait : 48000
Now that the clips and the model are synchronized, we can run the classification. We provide candidate labels (just as with the CLIP model in computer vision).
candidate_labels = ["Sound of a dog","Sound of cat"]
outputs=audio_zero_shot(audio_sample["audio"]["array"],candidate_labels=candidate_labels)
print("Score de "+candidate_labels[0],outputs[0]["score"])
print("Score de "+candidate_labels[1],outputs[1]["score"])
Score de Sound of a dog 0.9805886149406433
Score de Sound of cat 0.019411340355873108
Between the two candidate labels, the model gives a much higher score to the dog sound than to the cat sound. Keep in mind that zero-shot classification only ranks the labels you provide: this particular clip is actually labelled 'chirping_birds' in ESC-50, so neither candidate matches it (see the sketch below). Feel free to try your own audio clips or other examples from the dataset.
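As a quick check, we can rerun the classification with candidate labels that actually include the clip's true category. This is a minimal sketch reusing the pipeline defined above; the label wording is an arbitrary choice, so your scores may differ.

candidate_labels = ["Sound of a dog", "Sound of a cat", "Sound of birds chirping"]
outputs = audio_zero_shot(audio_sample["audio"]["array"], candidate_labels=candidate_labels)
# The pipeline returns one dict per label, sorted by descending score
for result in outputs:
    print(result["label"], ":", round(result["score"], 4))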
Automatic speech recognition#
Automatic speech recognition consists of transcribing speech into text. It is useful for voice note-taking, waking up smart devices (“Ok Google”, “Hey Siri”) and many other applications.
Implementation#
In this example, we use the LibriSpeech ASR corpus, which contains roughly 1,000 hours of English speech. Since the corpus is large, we load it in streaming mode rather than downloading it in full.
from datasets import load_dataset
dataset = load_dataset("librispeech_asr",split="train.clean.100",streaming=True,trust_remote_code=True)
example = next(iter(dataset))
example
{'file': '374-180298-0000.flac',
'audio': {'path': '374-180298-0000.flac',
'array': array([ 7.01904297e-04, 7.32421875e-04, 7.32421875e-04, ...,
-2.74658203e-04, -1.83105469e-04, -3.05175781e-05]),
'sampling_rate': 16000},
'text': 'CHAPTER SIXTEEN I MIGHT HAVE TOLD YOU OF THE BEGINNING OF THIS LIAISON IN A FEW LINES BUT I WANTED YOU TO SEE EVERY STEP BY WHICH WE CAME I TO AGREE TO WHATEVER MARGUERITE WISHED',
'speaker_id': 374,
'chapter_id': 180298,
'id': '374-180298-0000'}
from IPython.display import Audio as IPythonAudio
IPythonAudio(example["audio"]["array"],rate=example["audio"]["sampling_rate"])
We use the Distil-Whisper model (distil-whisper/distil-small.en), a distilled, smaller version of OpenAI's Whisper designed for English speech recognition.
Let's now build our Hugging Face pipeline.
reco_parole = pipeline(task="automatic-speech-recognition",model="distil-whisper/distil-small.en")
print("Sampling rate du modèle : ",reco_parole.feature_extractor.sampling_rate)
print("Sampling rate de notre extrait : ",example['audio']['sampling_rate'])
Sampling rate du modèle : 16000
Sampling rate de notre extrait : 16000
The sampling rates are identical, so nothing needs to be changed.
output=reco_parole(example["audio"]["array"])
print("Texte transcrit : ",output['text'])
print("Texte de base : ",example["text"].lower())
Texte transcrit : Chapter 16 I might have told you of the beginning of this liaison in a few lines, but I wanted you to see every step by which we came. I too agree to whatever Marguerite wished.
Texte de base : chapter sixteen i might have told you of the beginning of this liaison in a few lines but i wanted you to see every step by which we came i to agree to whatever marguerite wished
As you can see, the transcription is quite faithful to the original.
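To go beyond an eyeball comparison, you can quantify the difference with the word error rate (WER). A minimal sketch, assuming the jiwer package is installed (pip install jiwer) and using a very crude text normalization:

import re
from jiwer import wer

def normalize(text):
    # Lowercase and strip punctuation/digits so formatting differences are not counted as errors
    return re.sub(r"[^a-z' ]+", " ", text.lower()).strip()

print("WER:", wer(normalize(example["text"]), normalize(output["text"])))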
Text-to-speech#
This task is the reverse of the previous one: we provide text as input and the model generates audio of a voice speaking that text.
Implementation#
We use the VITS model from Kakao Enterprise (kakao-enterprise/vits-ljs).
text_parole = pipeline("text-to-speech", model="kakao-enterprise/vits-ljs")
Let's try generating a sentence in French.
text = """Ce cours de deep learning est incroyable."""
generated_parole = text_parole(text)
IPythonAudio(generated_parole["audio"][0],rate=generated_parole["sampling_rate"])
As you can hear, the result is not convincing, because the model was trained on English data. To generate French speech you would need a model trained on French (see also the notes at the end of this section). Let's now try an English sentence:
text = """This deep learning course is fantastic."""
generated_parole = text_parole(text)
IPythonAudio(generated_parole["audio"][0],rate=generated_parole["sampling_rate"])
Much better!
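If you want to keep the generated audio, you can also write the waveform to a .wav file. A minimal sketch, assuming the soundfile package is installed (the filename is arbitrary):

import soundfile as sf

# The pipeline output is a 2D array; index [0] as above to get the waveform
sf.write("generated_speech.wav", generated_parole["audio"][0], generated_parole["sampling_rate"])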
Note 1: You can also chain several models: for example, take your French sentence, translate it into English, and then generate the corresponding audio. A minimal sketch follows below.
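A minimal sketch of that idea, chaining a French-to-English translation pipeline with the text-to-speech pipeline built above. The translation checkpoint (Helsinki-NLP/opus-mt-fr-en) is a choice made here for illustration and is not used elsewhere in this notebook; a French-native TTS checkpoint (for example from the MMS-TTS family) would be another way to avoid the translation step.

# French -> English translation, then English text-to-speech
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")

french_text = "Ce cours de deep learning est incroyable."
english_text = translator(french_text)[0]["translation_text"]

generated_parole = text_parole(english_text)
IPythonAudio(generated_parole["audio"][0], rate=generated_parole["sampling_rate"])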
Note 2: If you want to generate sounds other than speech (music, ambience, noise effects, etc.), have a look at the Text-to-Audio category on Hugging Face; a small sketch is shown below.
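As a pointer in that direction, here is a minimal, hedged sketch using the text-to-audio pipeline with a small music-generation checkpoint (facebook/musicgen-small, chosen here for illustration). Generation can be slow on CPU, and the shape of the returned array can vary between models, so it is squeezed before playback.

import numpy as np

text_to_audio = pipeline("text-to-audio", model="facebook/musicgen-small")

music = text_to_audio("A calm lo-fi beat with soft piano")
IPythonAudio(np.squeeze(music["audio"]), rate=music["sampling_rate"])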