Some more job offers this November:

1) Job vacancy in Acoustic Engineering, Carbon Air, UK
2) Research Assistants on the research project FAST, Queen Mary University of London, UK
3) Funded PhD in machine learning applied to sound synthesis and content creation, Queen Mary University of London, UK
4) Real-time audio developer (Max/MSP & C++), IRCAM, FR

See below for links and further information.

1) https://plus.google/110378371746496014469/posts/iZpnEgfpw4e

2) The Centre for Digital Music at Queen Mary University of London is seeking to appoint several Research Assistants (both full-time and part-time) on the research project FAST (Fusing Audio and Semantic Technologies for Intelligent Music Production and Consumption). Funded by EPSRC, FAST is a collaboration of 3 UK universities with 5 industrial partners, including partners in Germany and the USA. The project runs over 5 years, with a total budget of over £5M.

Applicants are expected to have expertise in one or more of: music/audio signal processing, machine learning, ontologies, Semantic Web, artificial intelligence, logic and inference, natural language processing, user interfaces and user experience, music informatics, or music information retrieval. The successful candidate must also have excellent programming skills in suitable high-level languages, such as MATLAB, C++, Java, Python or Prolog, and it would be an advantage to have significant knowledge of several of the following: recording studio practice, Digital Audio Workstations, asset management systems, archiving, (digital) audio effects, harmonic analysis and harmonic modelling of audio, source separation, sinusoidal modelling, metadata, RDF, audio standards and the standardisation process (MPEG, AES, W3C, EBU), music theory, symbolic music representation, perception and music psychology, similarity, recommendation engines, (music) big data, cloud computing and virtualisation.

The posts are available immediately for 24 months, but may be extended beyond their current end date subject to further funding approval. Candidates must be able to demonstrate their eligibility to work in the UK in accordance with the Immigration, Asylum and Nationality Act 2006. Where required, this may include entry clearance or continued leave to remain under the Points Based Immigration Scheme.

Informal enquiries should be addressed to Prof Mark Sandler at [email protected]. Details about the School can be found at eecs.qmul.ac.uk

To apply, please visit jobs.qmul.ac.uk/5298&s=electronic%20engineering%20and%20computer%20science for full-time positions, or jobs.qmul.ac.uk/5299&s=electronic%20engineering%20and%20computer%20science for part-time positions.

The closing date for applications is Monday 5 January 2015, and interviews are expected to be held shortly afterwards.

3) Dear all,

A funded PhD place is available to work within the Centre for Digital Music on the subject of machine learning applied to sound synthesis and content creation. Description below, and full details (including how to apply) at eecs.qmul.ac.uk/phd/research-topics/funded

Please feel free to distribute this to anyone who might be interested. Thanks.
Dr. Josh Reiss
Reader in Audio Engineering
Centre for Digital Music
Queen Mary University of London

Fully-funded PhD studentship: Machine learning applied to sound synthesis and media content creation

Applications are invited from all nationalities for a funded PhD studentship starting January 2015 within the Centre for Digital Music (C4DM) at Queen Mary University of London, to perform cutting-edge research in machine learning applied to sound synthesis and content creation.

In this PhD project, the concept of an Intelligent Assistant is investigated as a means of short-form media content creation. A small high-tech company is in the process of creating a collaborative cloud platform for the creation of short-form media, such as advertisements, promotional videos and local information. The Intelligent Assistant would identify and organise the content, add effects and synthesised sounds where necessary, and present the produced content as a coherent story. It will be used as a tool by content creators to assist in quick and intuitive content creation. The goal of this project is to create and assess such tools, focusing on the challenges of varied, user-generated content with limited metadata, and the need for an enhanced user experience.

Research questions to be investigated include:
- How best can sounds be synthesised in order to provide additional audio content to enhance the production?
- Can multimedia (especially audio) content be intelligently combined to effectively tell a story?
- How can this be assessed and evaluated? What are the key factors, features and metrics for intelligent storyboard systems?

This project is expected to generate high-impact results, especially in the growing research fields of signal processing, sound synthesis, music informatics and semantic tools for content creation and production. There is scope to tailor the project to the interests and skills of the successful candidate.

Informal enquiries can be made by email to Dr. Josh Reiss: [email protected]

More details, including how to apply, can be found at: eecs.qmul.ac.uk/phd/research-topics/funded

The closing date is 16 December 2014, and interviews are expected to take place during the week of 5 January 2015.

4) Dear all,

We at IRCAM are looking for a real-time audio developer (Max/MSP & C++) who's up for something a little different: starting a revolution in the world of brain science! This is a fixed-term, 2-year contract based in Paris. Description below, and at forumnet.ircam.fr/fr/user-groups/general/forum/topic/job-offer-maxmsp-developer-ircam/

Best regards,
JJ Aucouturier
CNRS Researcher (Cognitive Science)
IRCAM (Paris, France)

--

*Position*: Real-time audio developer (fixed term)
*Place*: IRCAM (STMS, UMR9912), in central Paris (France)
*Duration*: 2 years
*Contact*: JJ Aucouturier (CNRS) aucouturier _at_ gmail

The context for this position is the ERC Starting Grant project CREAM ("Cracking the Emotional Code of Music"), led by PI Jean-Julien Aucouturier. The developer will be based at the STMS laboratory (Science and Technology of Music and Sound, UMR9912) at IRCAM, in central Paris (ircam.fr). The developer will work under the scientific supervision of JJ Aucouturier (CNRS), as part of IRCAM's "Perception and Sound Design" team (dir: Patrick Susini).
*Scientific context and objectives*: We're a small group of computer scientists and physicists on a mission to crack a complicated brain science problem ("how does music create emotions"), with a hacker/DIY mentality, the drive to learn all the biology that's needed on the way, and the ambition to become references in the field within 5 years. A recent theory suggests that music may create emotions by imitating the emotional expression involved in spoken language: music's trembling notes, hesitating phrases, and bright or dark timbres may well be "heard" and processed by the brain "as if" they were emotional speech (Juslin & Laukka, 2003). However, the cognitive neuroscience community does not have the audio signal processing tools and expertise that would be necessary to test this hypothesis. As the Max/MSP developer in the team, your role will be to develop a series of Max/MSP tools able to transform certain voice and music characteristics in real time. These tools will be developed in collaboration with the other researchers in the team, and will be used to conduct experimental neuroscience studies.

*Work description*: The real-time audio manipulations will be specified from our recent pilot studies using professional audio hardware (VoicePro, TC Helicon; see Aucouturier et al., 2014). Precisely, your task will be to emulate in Max/MSP some of the functionalities available in hardware: pitch modification (vibrato, inflection, pitch shifting), formant shifting, filtering (high-pass, low-pass) and dynamic compression. The software tools will need to meet strict real-time constraints, with a maximum latency of 15-20 ms. As a first step, you will build a prototype using tools already available in the Max/MSP community, in IRCAM and elsewhere (SuperVP, Trax, PSOLA, etc.). Then, you will extend these existing tools by porting into Max/MSP more recent functionalities (e.g. those developed in the Analysis/Synthesis team at IRCAM) that may prove necessary for the neuroscience studies conducted in the team. Finally, you will help disseminate these new tools in the neuroscience community, e.g. by supporting other laboratories who wish to use them in their own work. At IRCAM, you will work in close interaction with a signal processing doctoral student tasked with using and testing the tools you develop. You will also be part of a larger team composed of at least one other doctoral student and a postdoctoral researcher.
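To give a flavour of the kind of tool involved, here is a minimal C++ sketch of the first effect listed above, vibrato, built as a sinusoidally modulated fractional delay line. The class, its parameters and the per-sample process() callback are illustrative assumptions, not the project's code; an actual deliverable would wrap the same logic in a Max/MSP external's perform routine.

// Illustrative sketch only -- not IRCAM's code. Vibrato implemented as a
// sinusoidally modulated fractional delay line, the classic textbook
// construction for this effect.
#include <cmath>
#include <cstdio>
#include <vector>

constexpr float kTwoPi = 6.2831853f;

class Vibrato {
public:
    Vibrato(float sampleRate, float rateHz = 5.0f, float depthMs = 2.0f)
        : sr_(sampleRate), rateHz_(rateHz), depthMs_(depthMs),
          buf_(static_cast<size_t>(sampleRate * 0.05f), 0.0f) {} // 50 ms max delay

    // Per-sample callback; a Max/MSP external would call this from its
    // perform routine over each signal vector.
    float process(float in) {
        buf_[writeIdx_] = in;

        // Delay in samples: a fixed centre offset plus sinusoidal modulation.
        const float centre = (depthMs_ + 1.0f) * 0.001f * sr_;
        const float delay  = centre + depthMs_ * 0.001f * sr_ * std::sin(phase_);
        phase_ += kTwoPi * rateHz_ / sr_;
        if (phase_ >= kTwoPi) phase_ -= kTwoPi;

        // Read behind the write head with linear interpolation, wrapping
        // around the circular buffer (delay is always < buffer length).
        float readPos = static_cast<float>(writeIdx_) - delay;
        if (readPos < 0.0f) readPos += static_cast<float>(buf_.size());
        const size_t i0   = static_cast<size_t>(readPos);
        const size_t i1   = (i0 + 1) % buf_.size();
        const float  frac = readPos - static_cast<float>(i0);
        const float  out  = buf_[i0] + frac * (buf_[i1] - buf_[i0]);

        writeIdx_ = (writeIdx_ + 1) % buf_.size();
        return out;
    }

private:
    float sr_, rateHz_, depthMs_;
    std::vector<float> buf_;
    size_t writeIdx_ = 0;
    float phase_ = 0.0f;
};

int main() {
    // Offline smoke test: run a 440 Hz sine through the effect.
    const float sr = 44100.0f;
    Vibrato vib(sr);
    for (int n = 0; n < 8; ++n) {
        const float x = std::sin(kTwoPi * 440.0f * static_cast<float>(n) / sr);
        std::printf("%.6f\n", vib.process(x));
    }
    return 0;
}

Note how the 15-20 ms latency budget in the brief constrains a design like this: the modulated delay itself adds only a few milliseconds (about 3 ms at the 2 ms depth used here), so most of the allowance is left for the host's I/O buffering (a 256-sample block at 44.1 kHz already costs about 5.8 ms each way).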
*Ideal background*: The ideal person for this position is a software developer specialised in real-time audio signal processing. He/she should hold a Master's or doctorate in computer science/signal processing (or equivalent work experience) and demonstrate excellent audio programming skills in Max/MSP and C++. He/she should have experience developing Max/MSP objects and audio acquisition hardware/drivers, and have good knowledge of audio processing algorithms (PSOLA, phase vocoder, etc.).

*Environment*: IRCAM was created by composer Pierre Boulez in 1977; it is now the world's largest R&D institute in computer music, as well as an important art centre for contemporary music (ircam.fr). IRCAM is ideally located in central Paris, just opposite the Centre Pompidou modern art museum. Fully equipped with psychoacoustics and neuroscience experimentation booths, IRCAM's Perception and Sound Design (PDS) team (pds.ircam.fr) is the only research unit in the institute devoted to cognition and the experimental science of sound and music. Taking root in the seminal studies of music timbre perception by D. Wessel and S. McAdams, work in the PDS team now encompasses topics as varied as sonic environmental quality (for which we won the Ministry of Environment's Décibel d'or 2014 award), sound design (we designed the sound for the new Renault electric cars) and music neuroscience.

*Duration*: 2 years (can be discussed)

*Salary*: c. €2000 per month, according to work experience and academic degrees (this is a CNRS ingénieur d'étude/ingénieur de recherche-level position)

*Applying*: Candidates should send a detailed CV, cover letter and online portfolio of software developments (github, maxobjects, etc.) by email to Jean-Julien Aucouturier (aucouturier@gmail) before 1 December 2014. Candidate interviews will be held in Paris in December 2014, for a starting date as early as January 2015.

*References*:

Aucouturier, J.J., Johansson, P., Segnini, R., Mercadié, L., Hall, L. & Watanabe, K. (2014). Covert digital manipulations of vocal emotion alter the speaker's emotional state in a congruent direction (available on request).

Juslin, P. N., & Laukka, P. (2003). Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin, 129(5), 770-814.
Posted on: Wed, 26 Nov 2014 20:00:14 +0000
