Google Cloud npm text-to-speech package: FEMALE voice not working

The Google Cloud text-to-speech npm package's FEMALE voice is not working; the synthesized audio always comes back with the NEUTRAL voice.
```
export async function quickStart(text) {
  const request = {
    input: { text: text },
    // Select the language and SSML voice gender (optional)
    voice: { languageCode: 'en-US', ssmlGender: 'FEMALE' },
    // ... (rest of the snippet truncated in the original)
```

Chrome manifest V3 extensions and externally_connectable documentation

After preparing the migration of my Chrome manifest V2 extension to manifest V3 and reading about the problems with persistent service workers, I prepared myself for a battle with the unknown. My V2 background script uses a whole bunch of globally declared variables, and I expected I would need to refactor it.
But to my great surprise, my extension background script seems to work out of the box, without any trouble, in manifest V3. My extension uses externally_connectable. The typical use case for my extension is that users can navigate to my website 'bla.com' and from there send jobs to the extension background script.
My manifest says:
"externally_connectable": {
"matches": [
"*://localhost/*",
"https://*.bla.com/*"
]
}
My background script listens to external messages and connects:
chrome.runtime.onMessageExternal.addListener( (message, sender, sendResponse) => {
log('received external message', message);
});
chrome.runtime.onConnectExternal.addListener(function(port) {
messageExternalPort = port;
// onDisconnect is an event, so register a listener rather than calling it
messageExternalPort.onDisconnect.addListener(function () {
messageExternalPort = null;
});
});
From bla.com I send messages to the extension as follows
chrome.runtime.sendMessage(EXTENSION_ID, { type: "collect" });
From bla.com I receive messages from the extension as follows
const setUpExtensionListener = () => {
// Connect to chrome extension
this.port = chrome.runtime.connect(EXTENSION_ID, { name: 'query' });
// Add listener
this.port.onMessage.addListener(handleExtensionMessage);
}
I tested all scenarios, including anticipating the famous service worker unload after 5 minutes (or 30 seconds of inactivity), but it all seems to work. Good for me, but something is itchy: I cannot find any documentation that explains precisely under which circumstances the service worker is unloaded. I do not understand why things seem to work out of the box in my situation while so many others experience problems. Can anybody explain, or refer to proper documentation? Thanks in advance.
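For context, a minimal MV3 manifest sketch matching the setup described in the question; the extension name and background filename are placeholders, not taken from the original:

```json
{
  "manifest_version": 3,
  "name": "Example extension",
  "version": "1.0",
  "background": { "service_worker": "background.js" },
  "externally_connectable": {
    "matches": [
      "*://localhost/*",
      "https://*.bla.com/*"
    ]
  }
}
```

Note that in MV3 event listeners must be registered synchronously at the top level of background.js; they are then re-attached every time the service worker wakes up, which may be why a listener-based setup like the one above keeps working across unloads.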

Autodesk Forge Design Automation - Error Opening a model - How to bypass Dialog Box "Model has been transmitted from remote location"

I'm trying to use the Design Automation API to open a Revit model from our BIM 360 account, e.g. to upgrade it from a previous version of Revit.
When I test locally, some RVT files display a dialog box during opening:
Transmitted File - "This file has been transmitted from a remote location" (a side effect of being downloaded from BIM 360).
[Image: "Transmitted file" dialog box on file open]
My question is: how can I bypass this dialog box so that the add-in can work with Design Automation (in which no UI, dialogs or warnings are supported)?
I did some research in Jeremy T's posts on this issue and found some information about how to use the DialogBoxShowing event to catch and respond to dialog boxes before they appear:
https://thebuildingcoder.typepad.com/blog/2009/06/autoconfirm-save-using-dialogboxshowing-event.html
However, the problem is that this event is part of the UIApplication namespace, so it is likely not available in the Design Automation cloud Revit engine:
https://www.revitapidocs.com/2017/cb46ea4c-2b80-0ec2-063f-dda6f662948a.htm
Also, in any case, it appears that this particular event is not fired when a model is opened:
https://forums.autodesk.com/t5/revit-api-forum/dialogboxshowing-event-not-firing-when-model-is-opened/td-p/5578594
Any ideas about how I can open transmitted models for processing with Design Automation?
Thanks!
Ed G
The file from BIM 360 is an eTransmitted workshared file. To open such a file in Design Automation for Revit, you will need to use OpenOptions (DetachAndPreserveWorksets or DetachAndDiscardWorksets). If you preserve the worksets and would like to save the file, remember to use the correct SaveAsOptions.
Explicitly specify a local name to your input file in your activity:
{
"alias": "prod",
"activity": {
"id": "YourActivity",
"commandLine": [ "$(engine.path)\\\\revitcoreconsole.exe /al $(appbundles[YourBundle].path)" ],
"parameters": {
"rvtFile": {
"zip": false,
"ondemand": false,
"verb": "get",
"description": "Input Revit model",
"required": true,
"localName": "input.rvt",
}
},
"engine": "Autodesk.Revit+2020",
"appbundles": [ "YourName.YourBundle+label" ],
"description": "Bundle description."
}
}
In your app bundle, open the input file "input.rvt" using OpenOptions with DetachAndPreserveWorksets or DetachAndDiscardWorksets:
ModelPath path = ModelPathUtils.ConvertUserVisiblePathToModelPath("input.rvt");
var opts = new OpenOptions
{
DetachFromCentralOption = DetachFromCentralOption.DetachAndPreserveWorksets
};
var document = application.OpenDocumentFile(path, opts);
This was covered in my AU class (see the video at the 37-minute mark).
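As a sketch of how the pieces connect (the activity id and the signed URL below are placeholders, not values from the original): the workitem argument for rvtFile is downloaded to the localName declared in the activity, which is the "input.rvt" the app bundle then opens.

```json
{
  "activityId": "YourNickname.YourActivity+prod",
  "arguments": {
    "rvtFile": {
      "verb": "get",
      "url": "https://signed-url-to-your-bim360-model"
    }
  }
}
```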

speechSynthesis.getVoices() is empty array in Chromium Fedora

Is the Speech Synthesis API supported by Chromium? Do I need to install voices? If so, how can I do that? I'm using Fedora. Are voices like video codecs, where I need to install an extra package for them to work?
I've tried this code:
var msg = new SpeechSynthesisUtterance('I see dead people!');
msg.voice = speechSynthesis.getVoices().filter(function(voice) {
return voice.name == 'Whisper';
})[0];
speechSynthesis.speak(msg);
from article Web apps that talk - Introduction to the Speech Synthesis API
but the function speechSynthesis.getVoices() returns an empty array.
I've also tried:
window.speechSynthesis.onvoiceschanged = function() {
console.log(window.speechSynthesis.getVoices())
};
the function gets executed, but the array is also empty.
On https://fedoraproject.org/wiki/Chromium there is info to use the --enable-speech-dispatcher flag, but when I used it I got a warning that the flag is not supported.
Is Speech Synthesis API supported by Chromium?
Yes, the Web Speech API has basic support in the Chromium browser, though there are several issues with both the Chromium and Firefox implementations of the specification; see Blink>Speech, Internals>SpeechSynthesis, Web Speech.
Do I need to install voices? If so how can I do that? I'm using
Fedora. Is voices like video that I need to install extra package for
it to work?
Yes, voices need to be installed. Chromium does not ship with voices to set as the SpeechSynthesisUtterance voice attribute by default; see How to use Web Speech API at chromium?; How to capture generated audio from window.speechSynthesis.speak() call?.
You can install speech-dispatcher as the system speech synthesis server and espeak as the speech synthesizer.
$ yum install speech-dispatcher espeak
You can also create a configuration file for speech-dispatcher in the user's home folder to set specific options for both speech-dispatcher and the output module that you use, for example espeak:
$ spd-conf -u
Launching Chromium with --enable-speech-dispatcher flag automatically spawns a connection to speech-dispatcher, where you can set the LogLevel between 0 and 5 to review SSIP communication between Chromium code and speech-dispatcher.
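As an illustrative sketch, the log level lives in the per-user configuration that spd-conf creates (the path is speech-dispatcher's default user config location; the value shown is just an example):

```
# ~/.config/speech-dispatcher/speechd.conf
# 0 = no logging ... 5 = verbose, including SSIP communication
LogLevel  5
```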
.getVoices() returns results asynchronously and needs to be called twice; see this Electron issue on GitHub: Speech Synthesis: No Voices #586.
window.speechSynthesis.onvoiceschanged = e => {
const voices = window.speechSynthesis.getVoices();
// do speech synthesis stuff
console.log(voices);
}
window.speechSynthesis.getVoices();
or composed as an asynchronous function which returns a Promise that resolves with the array of voices:
(async () => {
const getVoices = () => {
return new Promise(resolve => {
// If the voice list is already populated, resolve immediately;
// otherwise voiceschanged may never fire again and the Promise would hang.
const voices = window.speechSynthesis.getVoices();
if (voices.length) {
return resolve(voices);
}
window.speechSynthesis.onvoiceschanged = () => {
// optionally filter the returned voices here, e.g.
// .filter(({name}) => /^en.+whisper/.test(name))
resolve(window.speechSynthesis.getVoices());
};
window.speechSynthesis.getVoices();
});
};
const voices = await getVoices();
console.log(voices);
})();

Get Local IP of a device in chrome extension

I am working on a Chrome extension which is supposed to discover and then communicate with other devices on a local network. To discover them, it needs to find its own IP address, and from that the IP range of the network to check for other devices. I am stuck on how to find the IP address of the local machine (not localhost, and not the address exposed to the internet, but the address on the local network). Basically, what I would love is to get, inside my background.js, what would be the ifconfig output in the terminal.
The Chrome Apps API offers chrome.socket, which seems to be able to do this; however, it is not available for extensions. Reading through the extensions API, I have not found anything that seems to enable me to find the local IP.
Am I missing something, or is this impossible for some reason? Any other way to discover other devices on the network would also do just fine (as they would be in the same IP range), but while there were rumors in December 2012 that there could be a discovery API for extensions, nothing seems to exist yet.
Does anybody have any ideas?
You can get a list of your local IP addresses (more precisely: The IP addresses of your local network interfaces) via the WebRTC API. This API can be used by any web application (not just Chrome extensions).
Example:
// Example (using the function below).
getLocalIPs(function(ips) { // ips is an array of local IP addresses.
document.body.textContent = 'Local IP addresses:\n ' + ips.join('\n ');
});
function getLocalIPs(callback) {
var ips = [];
var RTCPeerConnection = window.RTCPeerConnection ||
window.webkitRTCPeerConnection || window.mozRTCPeerConnection;
var pc = new RTCPeerConnection({
// Don't specify any stun/turn servers, otherwise you will
// also find your public IP addresses.
iceServers: []
});
// Add a media line, this is needed to activate candidate gathering.
pc.createDataChannel('');
// onicecandidate is triggered whenever a candidate has been found.
pc.onicecandidate = function(e) {
if (!e.candidate) { // Candidate gathering completed.
pc.close();
callback(ips);
return;
}
var match = /^candidate:.+ (\S+) \d+ typ/.exec(e.candidate.candidate);
if (!match) return; // ignore candidate lines the regex does not match
var ip = match[1];
if (ips.indexOf(ip) == -1) // avoid duplicate entries (tcp/udp)
ips.push(ip);
};
pc.createOffer(function(sdp) {
pc.setLocalDescription(sdp);
}, function onerror() {});
}
<body style="white-space:pre"> IP addresses will be printed here... </body>
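The address-extraction regex from the answer above can be exercised outside the browser. A minimal sketch; the candidate string is illustrative, real ones arrive in the onicecandidate callback:

```javascript
// Extract the connection address from an ICE candidate line.
// Sample candidate string (illustrative values).
const candidate =
  'candidate:842163049 1 udp 1677729535 192.168.86.100 53705 typ srflx';

const match = /^candidate:.+ (\S+) \d+ typ/.exec(candidate);
const ip = match ? match[1] : null;

console.log(ip); // "192.168.86.100"
```

Note that recent Chrome versions expose mDNS `.local` hostnames instead of raw local IPs in candidates, so the captured value is not always an IP address.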
After some searching, I've found that a similar question was answered before.
This API is inaccessible from extensions but available for Chrome apps:
use chrome.system.network.getNetworkInterfaces.
This will return an array of all interfaces with their IP address.
This is my sample code:
chrome.system.network.getNetworkInterfaces(function(interfaces){
console.log(interfaces);
});
manifest-permissions:
"permissions": [ "system.network" ], ...
Works for me too, and it replies:
(4) [
{address: "xxxx", name: "en0", prefixLength: 64},
{address: "192.168.86.100", name: "en0", prefixLength: 24},
{address: "xxxx", name: "awdl0", prefixLength: 64},
{address: "xxxx", name: "utun0", prefixLength: 64}
]
See http://developer.chrome.com/extensions/webRequest.html for details. My code example:
// get IP using webRequest
var currentIPList = {};
chrome.webRequest.onCompleted.addListener(
function(info) {
currentIPList[info.url] = info.ip;
currentIPList[info.tabId] = currentIPList[info.tabId] || [];
currentIPList[info.tabId].push(info);
return;
},
{
// match all URLs; an empty "urls" array would match nothing
urls: ["<all_urls>"]
}
);

How do I place a "Share to Facebook" Button in my Native Android App Built with Air for Android in Flash CS5.5?

I am new to this, but we have created a new Android native app using AIR for Android in Flash CS 5.5. We have a working app and want to add, on the last screen, a "Share to Facebook" button that will share the user's results to their Facebook wall / timeline. I have searched and searched, but all I can find is either written in Java, not ActionScript 3, or is about updating a status directly from the app. We only want to send a predefined snippet (the user's results) from our app. Can someone please point me in the right direction?
There's an API for this, with variants for AIR desktop, AIR mobile, and web applications. It began life being officially supported by both Facebook and Adobe, although I'm not sure if it still is:
http://code.google.com/p/facebook-actionscript-api/
There is plenty of documentation on how to use it. You need to create an app on Facebook, just so you can identify yourself properly. Add the Facebook Developers app to your FB account and then see here:
https://developers.facebook.com/
The basics of a share to wall call will look something like this:
import com.facebook.graph.data.FacebookAuthResponse;
import com.facebook.graph.Facebook;
function initializeFacebook()
{
Facebook.init("YOUR APP ID", onFacebookInit);
}
var interval:int = -1;
function onFacebookInit(success:Object, fail:Object):void
{
if (success)
{
post();
}
else
{
var opts:Object = { scope:"publish_stream" };
Facebook.login(null, opts);
// In theory you should get a post back when logged in.
// In practice this often fails, and it's better to poll for an access token.
interval = setInterval(pollForLoggedIn, 500);
}
}
function pollForLoggedIn():void
{
if (Facebook.getAuthResponse().accessToken)
{
clearInterval(interval);
post();
}
}
function post():void
{
var auth:FacebookAuthResponse = Facebook.getAuthResponse();
var token:String = auth.accessToken;
var user:String = auth.uid;
var values:Object =
{
access_token: token,
name: "TITLE OF YOUR POST",
picture: "OPTIONAL THUMBNAIL PATH",
link:"OPTIONAL LINK",
description: "OPTIONAL APP DESCRIPTION",
message: "BODY OF POST"
};
Facebook.api("/" + user + "/feed", handleSubmitFeed, values, URLRequestMethod.POST);
}
function handleSubmitFeed(success:Object, fail:Object):void
{
// Done.
}