GCE: how to add an external IP to an existing instance at boot

I'm using gcloud-java to manage some VM instances. The code to create a new instance is straightforward:
Address externalIp = compute.getAddress(addressId);
InstanceId instanceId = InstanceId.of("us-central1-a", "test-instance");
NetworkId networkId = NetworkId.of("default");
PersistentDiskConfiguration attachConfiguration =
    PersistentDiskConfiguration.builder(diskId).boot(true).build();
AttachedDisk attachedDisk = AttachedDisk.of("dev0", attachConfiguration);
NetworkInterface networkInterface = NetworkInterface.builder(networkId)
    .accessConfigurations(AccessConfig.of(externalIp.address()))
    .build();
MachineTypeId machineTypeId = MachineTypeId.of("us-central1-a", "n1-standard-1");
InstanceInfo instance =
    InstanceInfo.of(instanceId, machineTypeId, attachedDisk, networkInterface);
Operation operation = compute.create(instance);
// Wait for operation to complete
operation = operation.waitFor();
if (operation.errors() == null) {
    System.out.println("Instance " + instanceId + " was successfully created");
} else {
    // inspect operation.errors()
    throw new RuntimeException("Instance creation failed");
}
But what should I do if I have an existing instance that I want to start with an external IP attached?
I've tried the following: first I create a RegionAddressId and get an Address, with which I build the networkInterface.
RegionAddressId addressId = RegionAddressId.of("europe-west1", "test-address");
Operation operationAdd = compute.create(AddressInfo.of(addressId));
operationAdd = operationAdd.waitFor();
Address externalIp = compute.getAddress(addressId);
NetworkId networkId = NetworkId.of("default");
NetworkInterface networkInterface = NetworkInterface.builder(networkId)
    .accessConfigurations(NetworkInterface.AccessConfig.of(externalIp.address()))
    .build();
Then I get my instance and add the access config:
InstanceId instanceId = InstanceId.of("my-server", "europe-west1-b","my-instance");
Instance instance = compute.getInstance(instanceId);
instance.addAccessConfig("default", NetworkInterface.AccessConfig.of(externalIp.address()));
Operation operation = instance.start();
The result is that my instance boots with a different external IP, which I don't know how to obtain programmatically.
What is the correct procedure?
Thanks

I found the solution myself.
Compute compute = ComputeOptions.defaultInstance().service();
InstanceId instanceId = InstanceId.of("my-server", "europe-west1-b", "my-instance");
Operation operation = compute.start(instanceId);
Operation completedOperation = operation.waitFor();
if (completedOperation == null) {
    // operation no longer exists
} else if (completedOperation.errors() != null) {
    // operation failed, handle error
}
Instance instance = compute.getInstance(instanceId);
String publicIp =
    instance.networkInterfaces().get(0).accessConfigurations().get(0).natIp();
I start the instance using the start method of Compute and then, after the operation completes, I get the Instance object: its first network interface's access configuration carries the external (NAT) IP that was assigned at boot.
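If instead I wanted the instance to end up with the reserved static address from my first attempt, rather than the ephemeral one assigned at boot, something along these lines should work after the start operation completes. This is only a sketch: I'm assuming the interface is named "nic0", the default access config is named "External NAT", and that a deleteAccessConfig counterpart to addAccessConfig is available in this gcloud-java version.
// Sketch only: swap the ephemeral access config assigned at boot for the
// reserved static address. Assumes interface "nic0", access config name
// "External NAT", and deleteAccessConfig/addAccessConfig support in gcloud-java.
Address externalIp = compute.getAddress(RegionAddressId.of("europe-west1", "test-address"));
Instance started = compute.getInstance(instanceId);
Operation dropEphemeral = started.deleteAccessConfig("nic0", "External NAT");
dropEphemeral.waitFor();
Operation attachStatic = started.addAccessConfig("nic0",
    NetworkInterface.AccessConfig.of(externalIp.address()));
attachStatic.waitFor();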

Related

I have an issue with Firebase error code 401

Recently I've been trying to make a GlobalBan script for my admin module in Roblox, but I came across an error when I tried to run it.
Below is the error and script:
local defaultDatabase = "Private";
local authenticationToken = "Private"
local HttpService = game:GetService("HttpService");
local DataStoreService = game:GetService("DataStoreService");
local FirebaseService = {};
local UseFirebase = true;

--== Script;
function FirebaseService:SetUseFirebase(value)
    UseFirebase = value and true or false;
end

function FirebaseService:GetFirebase(name, database)
    database = database or defaultDatabase;
    local datastore = DataStoreService:GetDataStore(name);
    local databaseName = database..HttpService:UrlEncode(name);
    local authentication = ".json?auth="..authenticationToken;
    local Firebase = {};

    function Firebase.GetDatastore()
        return datastore;
    end

    function Firebase:GetAsync(directory)
        local data = nil;
        --== Firebase Get;
        local getTick = tick();
        local tries = 0;
        repeat
        until pcall(function()
            tries = tries + 1;
            data = HttpService:GetAsync(databaseName..HttpService:UrlEncode(directory and "/"..directory or "")..authentication, true);
        end) or tries > 2;
        if type(data) == "string" then
            if data:sub(1, 1) == '"' then
                return data:sub(2, data:len() - 1);
            elseif data:len() <= 0 then
                return nil;
            end
        end
        return tonumber(data) or data ~= "null" and data or nil;
    end

    function Firebase:SetAsync(directory, value, header)
        if not UseFirebase then return end
        if value == "[]" then self:RemoveAsync(directory); return end;
        --== Firebase Set;
        header = header or {["X-HTTP-Method-Override"] = "PUT"};
        local replyJson = "";
        if type(value) == "string" and value:len() >= 1 and value:sub(1, 1) ~= "{" and value:sub(1, 1) ~= "[" then
            value = '"'..value..'"';
        end
        local success, errorMessage = pcall(function()
            replyJson = HttpService:PostAsync(databaseName..HttpService:UrlEncode(directory and "/"..directory or "")..authentication, value,
                Enum.HttpContentType.ApplicationUrlEncoded, false, header);
        end);
        if not success then
            warn("FirebaseService>> [ERROR] "..errorMessage);
            pcall(function()
                replyJson = HttpService:JSONDecode(replyJson or "[]");
            end)
        end
    end

    function Firebase:RemoveAsync(directory)
        if not UseFirebase then return end
        self:SetAsync(directory, "", {["X-HTTP-Method-Override"] = "DELETE"});
    end

    function Firebase:IncrementAsync(directory, delta)
        delta = delta or 1;
        if type(delta) ~= "number" then warn("FirebaseService>> increment delta is not a number for key ("..directory.."), delta(", delta, ")"); return end;
        local data = self:GetAsync(directory) or 0;
        if data and type(data) == "number" then
            data = data + delta;
            self:SetAsync(directory, data);
        else
            warn("FirebaseService>> Invalid data type to increment for key ("..directory..")");
        end
        return data;
    end

    function Firebase:UpdateAsync(directory, callback)
        local data = self:GetAsync(directory);
        local callbackData = callback(data);
        if callbackData then
            self:SetAsync(directory, callbackData);
        end
    end

    return Firebase;
end

return FirebaseService;
Error Code:
23:50:36.033 TestService: Data table for bans is currently nil. - Studio
23:50:36.255 FirebaseService>> [ERROR] HTTP 401 (Unauthorized) - Studio
I have tried creating a new database and renewing my auth token, but I'm still getting error code 401.
I feel like it's an auth token issue, but at the same time I'm not sure it is. I hope you can help me figure out the error; feel free to ask if you need more info regarding this issue.
It turns out my auth token was the wrong one; everything's good now.

Concurrency issue in Flink streaming job

I have a Flink streaming job which does user fingerprinting based on click-stream event data. A code snippet is attached below:
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// use processing time for windowing
env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
DataStream<EventData> input =
    ConfluentKafkaSource.createKafkaSourceFromApplicationProperties(env);

final OutputTag<EventData> emailPresentTag = new OutputTag<>("email-present") {};
final OutputTag<EventData> dispatchIdPresentTag = new OutputTag<>("dispatch-id-present") {};
final OutputTag<EventData> residueTag = new OutputTag<>("residue") {};
SingleOutputStreamOperator<EventData> splitStream = input
    .process(new ProcessFunction<EventData, EventData>() {
        @Override
        public void processElement(
                EventData data,
                Context ctx,
                Collector<EventData> out) {
            if (data.email != null && !data.email.isEmpty()) {
                // emit data to side output for emailPresentTag
                ctx.output(emailPresentTag, data);
            } else if (data.url != null && data.url.contains("utm_source=starling")) {
                // emit data to side output for dispatchIdPresentTag
                ctx.output(dispatchIdPresentTag, data);
            } else {
                // emit data to side output for ip/campaign attributing
                ctx.output(residueTag, data);
            }
        }
    });
DataStream<EventData> emailPresentStream = splitStream.getSideOutput(emailPresentTag);
DataStream<EventData> dispatchIdPresentStream = splitStream.getSideOutput(dispatchIdPresentTag);
DataStream<EventData> residueStream = splitStream.getSideOutput(residueTag);
// process the 3 split streams separately based on their corresponding logic
DataStream<EventData> enrichedEmailPresentStream = emailPresentStream
    .keyBy(e -> e.lbUserId == null ? e.eventId : e.lbUserId)
    .window(TumblingProcessingTimeWindows.of(Time.seconds(30)))
    .process(new AttributeWithEmailPresent());

DataStream<EventData> enrichedDispatchIdPresentStream = dispatchIdPresentStream
    .keyBy(e -> e.lbUserId == null ? e.eventId : e.lbUserId)
    .window(TumblingProcessingTimeWindows.of(Time.seconds(30)))
    .process(new AttributeWithDispatchPresent());

DataStream<EventData> enrichedResidueStream = residueStream
    .keyBy(e -> e.lbUserId == null ? e.eventId : e.lbUserId)
    .window(TumblingProcessingTimeWindows.of(Time.seconds(30)))
    .process(new AttributeWithIP());

DataStream<EventData> dataStream =
    enrichedEmailPresentStream.union(enrichedDispatchIdPresentStream, enrichedResidueStream);

final OutputTag<EventData> attributedTag = new OutputTag<>("attributed") {};
final OutputTag<EventData> unattributedTag = new OutputTag<>("unattributedTag") {};
SingleOutputStreamOperator<EventData> splitEnrichedStream = dataStream
    .process(new ProcessFunction<EventData, EventData>() {
        @Override
        public void processElement(
                EventData data,
                Context ctx,
                Collector<EventData> out) {
            if (data.attributedEmail != null && !data.attributedEmail.isEmpty()) {
                // emit data to side output for attributedTag
                ctx.output(attributedTag, data);
            } else {
                // emit data to side output for unattributedTag
                ctx.output(unattributedTag, data);
            }
        }
    });
//splitting attributed and unattributed stream
DataStream<EventData> attributedStream = splitEnrichedStream.getSideOutput(attributedTag);
DataStream<EventData> unattributedStream = splitEnrichedStream.getSideOutput(unattributedTag);
// attributing backlog unattributed events using attributed stream and flushing resultant attributed
// stream to kafka enriched_clickstream_event topic.
attributedStream = attributedStream
    .windowAll(TumblingProcessingTimeWindows.of(Time.seconds(30)))
    .process(new AttributeBackLogEvents())
    .forceNonParallel();
attributedStream
    .addSink(ConfluentKafkaSink.createKafkaSinkFromApplicationProperties())
    .name("Enriched Event kafka topic sink");

// handling unattributed events: flushing them to MySQL
Properties dbProperties = ConfigReader.getConfig().get(REPORTINGDB_PREFIX);
ObjectMapper objectMapper = new ObjectMapper();
unattributedStream.addSink(JdbcSink.sink(
    "INSERT IGNORE INTO events_store.unattributed_event (event_id, lb_user_id, ip, event) values (?,?,?,?)",
    (ps, t) -> {
        ps.setString(1, t.eventId);
        ps.setString(2, t.lbUserId);
        ps.setString(3, t.ip);
        try {
            ps.setString(4, objectMapper.writeValueAsString(t));
        } catch (JsonProcessingException e) {
            logger.error("[UserFingerPrintJob] " + e.getMessage());
        }
    },
    JdbcExecutionOptions.builder()
        .withBatchIntervalMs(Long.parseLong(dbProperties.getProperty(REPORTINGDB_FLUSH_INTERVAL)))
        .withMaxRetries(Integer.parseInt(dbProperties.getProperty(REPORTINGDB_FLUSH_MAX_RETRIES)))
        .build(),
    new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
        .withUrl(dbProperties.getProperty(REPORTINGDB_URL_PROPERTY_NAME))
        .withDriverName(dbProperties.getProperty(REPORTINGDB_DRIVER_PROPERTY_NAME))
        .withUsername(dbProperties.getProperty(REPORTINGDB_USER_PROPERTY_NAME))
        .withPassword(dbProperties.getProperty(REPORTINGDB_PASSWORD_PROPERTY_NAME))
        .build())).name("Unattributed event ReportingDB sink");
env.execute("UserFingerPrintJob");
Steps involved:
Split the stream into 3 streams based on 3 criteria, attribute each with an email, and then collect the union of these 3 streams.
Events which are unattributed in the above step are sunk to MySQL as backlog unattributed events.
Events which are attributed are passed on to the AttributeBackLogEvents ProcessFunction. I'm assuming the issue is here.
In the AttributeBackLogEvents function, I'm fetching all events from MySQL which have a cookie id (lb_user_id) or IP present in the incoming attributed events. Those events are then attributed and percolated down to the Kafka sink along with the incoming attributed events. For some of these unattributed events I'm seeing duplicate attributed events with a timestamp difference of 30 seconds (which is the processing-time window). What I think is happening is that while one task of the AttributeBackLogEvents function is still processing, a separate task fetches the same events from MySQL, and both tasks process them simultaneously. Basically, I want to enforce a record-level lock in MySQL (or in code) so that the same event doesn't get picked up twice. One way may be to use SELECT ... FOR UPDATE, but given the size of the data that could lead to deadlocks (or would this approach be useful?). I tried the forceNonParallel() method too, but it isn't helpful. A rough sketch of the record-level claim I have in mind is shown below.
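To illustrate the record-level lock I have in mind: instead of holding SELECT ... FOR UPDATE over a large range, each window firing could first stamp the rows it wants with its own claim token via a single atomic UPDATE and then read back only those rows. This is only a sketch of the idea, not part of the job above; the claimed_by column, the class name, and the claimEvents helper are hypothetical.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Sketch of a record-level "claim" on backlog events so two concurrent window
// firings cannot pick up the same rows. Assumes a hypothetical claimed_by
// column (VARCHAR, default NULL) on events_store.unattributed_event.
public class UnattributedEventClaimer {

    private final String jdbcUrl;
    private final String user;
    private final String password;

    public UnattributedEventClaimer(String jdbcUrl, String user, String password) {
        this.jdbcUrl = jdbcUrl;
        this.user = user;
        this.password = password;
    }

    /** Atomically claims matching backlog events and returns their JSON payloads. */
    public List<String> claimEvents(String lbUserId, String ip) throws Exception {
        String claimToken = UUID.randomUUID().toString();
        List<String> events = new ArrayList<>();
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password)) {
            conn.setAutoCommit(false);
            // Step 1: mark unclaimed rows with this task's token. Each row can be
            // claimed only once, so a concurrent task querying the same keys loses the race.
            try (PreparedStatement claim = conn.prepareStatement(
                    "UPDATE events_store.unattributed_event SET claimed_by = ? "
                            + "WHERE claimed_by IS NULL AND (lb_user_id = ? OR ip = ?)")) {
                claim.setString(1, claimToken);
                claim.setString(2, lbUserId);
                claim.setString(3, ip);
                claim.executeUpdate();
            }
            // Step 2: read back only the rows claimed with this token.
            try (PreparedStatement read = conn.prepareStatement(
                    "SELECT event FROM events_store.unattributed_event WHERE claimed_by = ?")) {
                read.setString(1, claimToken);
                try (ResultSet rs = read.executeQuery()) {
                    while (rs.next()) {
                        events.add(rs.getString("event"));
                    }
                }
            }
            conn.commit();
        }
        return events;
    }
}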

Cannot Create a Group, Invalid Scope

I am trying to create a group with the following .NET code:
var groupDef = new Group()
{
    DisplayName = name,
    MailNickname = name + " " + GetTimestamp(),
    Description = "Group/Team created for testing purposes",
    Visibility = "Private",
    GroupTypes = new string[] { "Unified" }, // same for all teams
    MailEnabled = true, // same for all teams
    SecurityEnabled = false, // same for all teams
    AdditionalData = new Dictionary<string, object>()
    {
        ["owners@odata.bind"] = owners.Select(o => $"{graphV1Endpoint}/users/{o.Id}").ToArray(),
        ["members@odata.bind"] = members.Select(o => $"{graphV1Endpoint}/users/{o.Id}").ToArray(),
    }
};

// Create the modern group for the team
Group group = await graph.Groups.Request().AddAsync(groupDef);
I am getting a "Method not allowed." error thrown on the last line shown (Group group = await ...).
The scope parameter for the auth provider contains "Group.Read.All Group.ReadWrite.All".
If I add Group.Create to the scope I get an error stating the scope is invalid. Reducing the scope to just "Group.Create" also gives an error.
It certainly appears that I cannot create a group without Group.Create in the scope, but that throws an error at sign in.
Microsoft.Graph is version 3.19.0
Microsoft.Graph.Core is version 1.22.0
I ended up serializing the object and making the HTTP call with my own code. Basically, something like this:
string json = JsonConvert.SerializeObject(groupDef, jsonSettings);
Group group = HttpPost<Group>("/groups", json); // HttpPost<T> is my own helper that POSTs the JSON to the Graph /groups endpoint
No permissions were changed.

Downloading dwg to Forge

I'm in the process of learning the Forge platform. I'm currently using an example (Jigsawify) written by Kean Walmsley, because it most accurately describes my goals. I'm running into an issue getting my file to download from an Azure Storage Account to Forge. The error I receive is "The value for one of the HTTP headers is not in the correct format." My question is: how does someone go about troubleshooting the HTTP protocol when writing, in this case, a workitem in code? I can put in a breakpoint to view the workitem, but I'm not versed enough to understand where the flaw in the HTTP header is, or even where to find it. Is there a specific property of the workitem I should be looking at? If I could find the HTTP statement I could test it, but I don't know where to find it.
Or am I just completely off base?
Anyway here's the code. It's a modified version of what Kean wrote:
static void SubmitWorkItem(Activity activity)
{
    Console.WriteLine("Submitting workitem...");

    CloudStorageAccount storageAccount =
        CloudStorageAccount.Parse(Microsoft.Azure.CloudConfigurationManager.GetSetting("StorageConnectionString"));
    StorageCredentials crd = storageAccount.Credentials;
    CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
    CloudFileShare ShareRef = fileClient.GetShareReference("000scrub");
    CloudFileDirectory rootDir = ShareRef.GetRootDirectoryReference();
    CloudFile Fileshare = rootDir.GetFileReference("3359fort.dwg");

    // Create a workitem
    var wi = new WorkItem()
    {
        Id = "", // Must be set to empty
        Arguments = new Arguments(),
        ActivityId = activity.Id
    };

    if (Fileshare.Exists())
    {
        wi.Arguments.InputArguments.Add(new Argument()
        {
            Name = "HostDwg", // Must match the input parameter in activity
            Resource = Fileshare.Uri.ToString(),
            StorageProvider = StorageProvider.Generic // Generic HTTP download (vs A360)
        });
    }

    wi.Arguments.OutputArguments.Add(new Argument()
    {
        Name = "Results", // Must match the output parameter in activity
        StorageProvider = StorageProvider.Generic, // Generic HTTP upload (vs A360)
        HttpVerb = HttpVerbType.POST, // Use HTTP POST when delivering result
        Resource = null, // Use storage provided by AutoCAD.IO
        ResourceKind = ResourceKind.ZipPackage // Upload as zip to output dir
    });

    container.AddToWorkItems(wi);
    container.SaveChanges();

    // Polling loop
    do
    {
        Console.WriteLine("Sleeping for 2 sec...");
        System.Threading.Thread.Sleep(2000);
        container.LoadProperty(wi, "Status"); // HTTP request is made here
        Console.WriteLine("WorkItem status: {0}", wi.Status);
    }
    while (
        wi.Status == ExecutionStatus.Pending ||
        wi.Status == ExecutionStatus.InProgress
    );

    // Re-query the service so that we can look at the details provided
    // by the service
    container.MergeOption =
        Microsoft.OData.Client.MergeOption.OverwriteChanges;
    wi = container.WorkItems.ByKey(wi.Id).GetValue();

    // Resource property of the output argument "Results" will have
    // the output url
    var url =
        wi.Arguments.OutputArguments.First(
            a => a.Name == "Results"
        ).Resource;
    if (url != null)
        DownloadToDocs(url, "SGA.zip");

    // Download the status report
    url = wi.StatusDetails.Report;
    if (url != null)
        DownloadToDocs(url, "SGA-Report.txt");
}
Any help is appreciated,
Chuck
Azure requires that you specify the x-ms-blob-type header when you upload to a presigned URL. See https://github.com/Autodesk-Forge/design.automation-.net-input.output.sample/blob/master/Program.cs#L167
So, I was able to figure out how to download my file from Azure to Forge using Albert's suggestion of moving to a blob service. Here's the code:
static void SubmitWorkItem(Activity activity)
{
    Console.WriteLine("Submitting workitem...");

    CloudStorageAccount storageAccount =
        CloudStorageAccount.Parse(Microsoft.Azure.CloudConfigurationManager.GetSetting("StorageConnectionString"));
    CloudBlobClient BlobClient = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer cloudBlobContainer = BlobClient.GetContainerReference("000scrub");
    CloudBlockBlob blockBlob = cloudBlobContainer.GetBlockBlobReference("3359fort.dwg");

    // Create a workitem
    var wi = new WorkItem()
    {
        Id = "", // Must be set to empty
        Arguments = new Arguments(),
        ActivityId = activity.Id
    };

    if (blockBlob.Exists())
    {
        wi.Arguments.InputArguments.Add(new Argument()
        {
            Name = "HostDwg", // Must match the input parameter in activity
            Resource = blockBlob.Uri.ToString(),
            StorageProvider = StorageProvider.Generic, // Generic HTTP download (vs A360)
            Headers = new System.Collections.ObjectModel.ObservableCollection<Header>()
            {
                new Header() { Name = "x-ms-blob-type", Value = "BlockBlob" } // This is required for Azure.
            }
        });
    }

    wi.Arguments.OutputArguments.Add(new Argument()
    {
        Name = "Results", // Must match the output parameter in activity
        StorageProvider = StorageProvider.Generic, // Generic HTTP upload (vs A360)
        HttpVerb = HttpVerbType.POST, // Use HTTP POST when delivering result
        Resource = null, // Use storage provided by AutoCAD.IO
        ResourceKind = ResourceKind.ZipPackage // Upload as zip to output dir
    });

    container.AddToWorkItems(wi);
    container.SaveChanges();

    // Polling loop
    do
    {
        Console.WriteLine("Sleeping for 2 sec...");
        System.Threading.Thread.Sleep(2000);
        container.LoadProperty(wi, "Status"); // HTTP request is made here
        Console.WriteLine("WorkItem status: {0}", wi.Status);
    }
    while (
        wi.Status == ExecutionStatus.Pending ||
        wi.Status == ExecutionStatus.InProgress
    );

    // Re-query the service so that we can look at the details provided
    // by the service
    container.MergeOption =
        Microsoft.OData.Client.MergeOption.OverwriteChanges;
    wi = container.WorkItems.ByKey(wi.Id).GetValue();

    // Resource property of the output argument "Results" will have
    // the output url
    var url =
        wi.Arguments.OutputArguments.First(
            a => a.Name == "Results"
        ).Resource;
    if (url != null)
        DownloadToDocs(url, "SGA.zip");

    // Download the status report
    url = wi.StatusDetails.Report;
    if (url != null)
        DownloadToDocs(url, "SGA-Report.txt");
}
What isn't complete is the result section: the ZIP has nothing in it, but hey, baby steps, right?
Thanks Albert.
-Chuck

How to register Quartz Scheduler with Windsor?

What I have tried so far:
container.Register(Component.For<Quartz.IScheduler>()
    .UsingFactoryMethod(() => GetQuartzScheduler())
    .LifeStyle.PerWebRequest);
Inside GetQuartzScheduler():
string tcp = string.Format("tcp://DevMachine:8888/QuartzScheduler");
NameValueCollection properties = new NameValueCollection();
properties["quartz.scheduler.instanceName"] = string.Format("MyQuartz_{0}", alias);
// set thread pool info
properties["quartz.scheduler.proxy"] = "true";
properties["quartz.threadPool.threadCount"] = "0";
properties["quartz.scheduler.proxy.address"] = tcp; //tcp variable is set before
Quartz.ISchedulerFactory sf = new Quartz.Impl.StdSchedulerFactory(properties);
sched = sf.GetScheduler(); // <-- throws the exception below
The exception is:
Factory method creating instances of component 'Late bound Quartz.IScheduler' returned null. This is not allowed and most likely a bug in the factory method.
Any suggestions to correct this?