Sompyler has not, and will never have, a graphical interface of its own. This is explicitly left to commercial app developers providing their own interfaces that use Sompyler in the backend via YAML text pipes. They may someday also use Neusician, another project of mine, which will provide a REST interface to its internal Sompyler score/instrument structure database, from which the YAML can be generated and finally fed to Sompyler.
No such third-party graphical interfaces have been developed as yet, and I do not really mind. What I will do is list license-compliant tools here, in order of notification date.
There is a bash script that tries to load the user's Bash-compatible
$SHELL, initialized with settings suited to working with Sompyler.
To edit text, which is the main way of telling Sompyler what to output, please use your favourite text editor.
To play audio files, use your favourite audio player. You can also import them into your favourite graphical DAW for further processing, as I suspect nothing generated out of the box will satisfy public expectations.
scripts/init-env.bash is executable. It sources the
~/.bashrc initialization files. It then activates the Python virtual environment, in which you can install the manageable number of packages Sompyler depends on; see the README file. Do not forget to link
$PROJECT_BASE/scripts/batch-sompyler.sh. That shell script can also be copied instead of linked; feel free to adapt it to your needs.
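If you prefer to set the environment up by hand, the steps the script automates boil down to creating and activating a Python virtual environment. This is a minimal sketch, not the actual contents of init-env.bash; the venv path and the requirements.txt name are assumptions, so check the README for the real dependency list:

```shell
# Hypothetical manual setup; adjust paths to your checkout.
cd "$PROJECT_BASE"                 # assumed: your Sompyler checkout
python3 -m venv .venv              # create the virtual environment
. .venv/bin/activate               # activate it in the current shell
pip install -r requirements.txt    # assumed file name; see the README
```

Activation only affects the current shell session, which is why the script opens a prepared $SHELL for you instead.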
But let's get to the examples ...
- The shell prompt is not included in the code sections, so you can paste the text in its entirety.
- cat > FILE <<"EOF" means: enter the following lines to create the designated file, and close it with the EOF string. You might prefer to use an editor like vim instead.
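For instance, the following creates a two-line file and can be pasted as a whole; the quoted "EOF" delimiter tells the shell to take the lines literally, without variable expansion (file name chosen just for illustration):

```shell
cat > /tmp/demo.txt <<"EOF"
first line, taken literally: $HOME is not expanded
second line
EOF
```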
Diphthongs and wandering formants
In a thread on a German sound-synthesis and synthetic-music board, a guy and I discussed formants wandering through the spectrum. As I understood him, he argued that frequency-varying partials are needed, for instance to realize diphthongs in a language. Nonsense, I thought, but I was talking to walls; maybe we just got our wires crossed. In the end I made sound examples and a diagram to convince at least those who were still interested in the topic.
In a nutshell: naturalistic wandering formants are best realized with frequency-static partials that rise and fall with different delays on the time axis, applied in one direction or the other along the frequency axis.
Generate and listen to the sounds, and look at the diagram, to understand how enormous the difference is to the ear. Conclusion: wandering formants need width on the time axis so that the hearing can integrate the continual spectral changes into the sound.
mkdir lib/instruments/test; cd $_

cat > multidrop.spli <<"EOF"
VOLUMES: 20:100;1,100
A: 0.03:1,1;2,1
S: 1:1;1,1
R: 0.2:1;1,0
PROFILE:
- match: 1
  V: 0
- match: 10
  S: 1:10;6,10;7,13;8,13!;9,13;10,10
- match: 19
  S: 1:10;1,13;2,13!;3,13;4,10;10,10
- match: 20
  V: -20
  S: true
EOF

cat > single-fv.spli <<"EOF"
VOLUMES: 20:80;1,80
PROFILE:
- match: 1
  V: 0
- match: 17
  FV: false
- match: 18
  FV: 0-5;0,0;1,15
  V: 20
- match: 19
  FV: false
- match: 20
  V: 0
EOF

r single-fv.spli -l 1 -f 300 --sound /tmp/sound.wav
r multidrop.spli -l 1 -f 300 --sound /tmp/sound2.wav
r multidrop.spli -l 1 -f 300 --outline --output /tmp/sound2-outline.png