BLEU score from Microsoft Translator Hub is 0.00

I just started exploring the Microsoft Translator Hub, and my question is: if I use a dictionary to prevent a word from being translated from English to Korean, are these the right steps?
Inside my Excel file, the first row holds the language codes.
The second row holds the value to preserve. For example, under the 'en' column I put 'BACK' as the value, and under the 'ko' column I also put 'BACK'.
Upload it as a document.
Under the Training tab, uncheck all the other documents; under the Dictionary tab, check the document just uploaded.
Start the training.
I've done all of these steps, but the BLEU score still comes out as 0.00. Am I doing it wrong? Have I also misunderstood the use of Translator Hub?
Thank you very much in advance.

Creating a dictionary-only training in the Microsoft Translator Hub does not produce a BLEU score; this is expected, since you didn't upload any training, tuning, or test set for this type of training. Refer to the Translator Hub User Guide, section 2.6, "Dictionary-only training".
To answer your second question: if the training is successful, the 'Evaluate Results' tab shows the machine translation of sentences that were part of the test dataset. Refer to the Hub User Guide, section 3.3.5, "Evaluate Results".
As there was no test set uploaded, the tab shows nothing. This is also expected.


LabVIEW pop-up requesting a folder/file

I just encountered an issue with a LabVIEW project.
Background
The software in question is usually a standalone application, but for debugging purposes we found a way to run it in the LabVIEW environment with the source files.
Issue
When we press the Run command (which is not broken, by the way), it starts processing the files, and at some point a folder-browser dialog pops up with no detail about what it is requesting. We have tried selecting the MAIN folder (where MAIN.VI is) and the SOURCE folder, which contains all the VIs and subVIs of the project, but either way it just updates a log tab with the text "The application has stopped" (which I assume is because we are not selecting the correct file/folder).
My main questions are:
Is there a way to tell what this pop-up expects us to select?
Are there known function blocks which could be asking for a file/folder path?
Additional information
A couple of months ago, a colleague knew this path and we ran the project correctly, but he has since forgotten it, which is why I am certain it works this way. It runs in a LabVIEW 13 environment.
Any help is greatly appreciated.
Greetings.
Try searching the VI Hierarchy by name for likely culprits:
Open a VI or project and select View»VI Hierarchy to display the VI Hierarchy window.
Initiate a search by typing the name of the item you want to find anywhere in the window. As you type, the search string appears, displaying the text as you type. LabVIEW highlights one item at a time whose name begins with the search string.
If there is more than one item whose name begins with the search string, press the Enter key to search for the next item that matches, or Shift-Enter to find the previous matching item.
I'm pretty sure all the LabVIEW primitives that can display a file or folder dialog have either "file" or "folder" in their names, but if that doesn't help, you could also try "save" or "write".
If you find more than one result, set breakpoints on them before running the code. When execution reaches a breakpoint, it will halt and highlight the breakpoint position; you can then use Step In / Step Over to check whether that node is the one triggering the dialog (and use the Pause button to continue execution if not).

How to Access NYC ACRIS Real Property Master via SODA2

How do we access NYC ACRIS Real Property Master via SODA2?
To recreate the issue:
go to: (step 1)
https://data.cityofnewyork.us/City-Government/ACRIS-Real-Property-Master/bnx9-e6tj
Navigate to Export -> SODA API - API Docs
you will end up here: (step 2)
https://dev.socrata.com/foundry/#/data.cityofnewyork.us/bnx9-e6tj
We see this authorization screen, and even after clicking Allow, the same screen appears again and again; it seems to be stuck in a loop. Can you help? This works for tiny files, but apparently not for the ACRIS Master File.
Unfortunately the ACRIS Real Property Master filtered view is a bit of a weird case that the City of New York has set up for their data.
That filtered view (denoted by the blue "funnel" icon you see at the upper left) is actually a limited access version of a dataset that has been made private, hence the page asking you to authenticate before viewing the API docs for the base dataset.
You can make a limited class of queries through the API endpoint for the filtered view itself:
https://data.cityofnewyork.us/resource/bnx9-e6tj.json
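A limited query against that endpoint can be made with standard SoQL parameters such as `$limit` and `$where`. Here's a minimal sketch using only the standard library; the `doc_type` column used in the example filter is an assumption about the dataset's schema, so check the column names on the dataset page first:

```python
import json
import urllib.parse
import urllib.request

BASE = "https://data.cityofnewyork.us/resource/bnx9-e6tj.json"

def build_query_url(base=BASE, limit=10, where=None):
    """Build a SODA (SoQL) query URL for the filtered view."""
    params = {"$limit": limit}
    if where:
        # e.g. "doc_type = 'DEED'" -- column name is an assumption
        params["$where"] = where
    return base + "?" + urllib.parse.urlencode(params)

def fetch(url):
    """Fetch and decode the JSON rows (requires network access)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

url = build_query_url(limit=5)
print(url)
# rows = fetch(url)  # uncomment to actually hit the endpoint
```

For larger pulls, page through the data by combining `$limit` with `$offset` rather than requesting everything at once.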
What are you looking to do?

Is there an easy way to find all usages of a dataset field in the SSRS designer?

I have a dataset with 15 fields used in various expressions of different textboxes. Is there an easy way to see where each of these fields is used, i.e. to get a list of all usages?
I'm using Business Intelligence Development Studio 2008.
This might help.
In Business Intelligence Development Studio 2008, with the project open:
Click menu Edit >> Find and Replace >> Find in Files.
A dialog window will appear for you to enter the search string. Type in the string you want to search for; "Look in:" should default to "Entire Solution". Click the Find All button.
You should see a Find Results window anchored somewhere (on mine it's at the bottom) listing all the places in your report that contain the string. Double-clicking a line will take you to the code where the string appears.
Note: Be careful not to enter any values into the code by mistake and save the report from the code window, as this may break your report.
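Since .rdl report definitions are plain XML, the same search can also be scripted outside the IDE. This is just a sketch; the folder path and field name in the usage example are hypothetical placeholders:

```python
import os
import re

def find_field_usages(root_dir, field_name):
    """Return (file, line_no, line_text) for each reference to
    Fields!<field_name> inside the .rdl files under root_dir."""
    pattern = re.compile(r"Fields!%s\b" % re.escape(field_name))
    hits = []
    for dirpath, _dirs, files in os.walk(root_dir):
        for name in files:
            if not name.lower().endswith(".rdl"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for line_no, line in enumerate(f, start=1):
                    if pattern.search(line):
                        hits.append((path, line_no, line.strip()))
    return hits

# Hypothetical usage -- adjust the path and field name to your project:
# for path, n, text in find_field_usages(r"C:\Reports\MyProject", "SalesAmount"):
#     print(f"{path}:{n}: {text}")
```

Unlike the IDE search, this is read-only, so there is no risk of accidentally editing and saving the report definition.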

Creating and running your own algorithms on a localized map

So here is my problem. I plan to implement a localized map for my college presenting all the locations, such as the main block, the tech park, etc. Not only do I plan to develop a GUI, but I also want to run my own algorithms, such as finding the quickest route from one block to another (note: I will be writing the algorithm myself, since I don't want to treat the shortest route as the quickest, but want to add my own parameters as weights). I want to host the map locally (say, on an in-house system), and it should be able to serve real-time requests (displaying the route to the nearest cafeteria) and display current data (such as what event is taking place in which corner of the campus). I know the Google Maps API or the OpenStreetMap/OpenLayers API will enable me to build my own map, but can I run my own algorithms on them? Also, can I add elements that I have created and replace the traditional building/office components with my own?
You can do the following:
1. Export a part of OpenStreetMap from their website (go to the Export tab).
2. Use ElementTree in Python to parse the exported XML data.
3. Use networkx to add the parsed data into a graph.
4. Run your algorithms on it.
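The steps above can be sketched with the standard library alone. ElementTree handles step 2; a small hand-rolled Dijkstra stands in for what networkx's `shortest_path` would do in steps 3-4, so the whole thing runs without third-party packages. The tiny inline OSM extract and the constant edge weights are made up for illustration:

```python
import heapq
import xml.etree.ElementTree as ET

# A tiny made-up OSM extract: three nodes joined by one way.
OSM_XML = """<osm>
  <node id="1" lat="12.820" lon="80.040"/>
  <node id="2" lat="12.821" lon="80.041"/>
  <node id="3" lat="12.822" lon="80.042"/>
  <way id="10">
    <nd ref="1"/><nd ref="2"/><nd ref="3"/>
  </way>
</osm>"""

def parse_osm(xml_text):
    """Step 2: parse nodes and ways into an adjacency dict.
    The edge weight here is a constant 1.0; replace it with your
    own parameters (distance, crowding, ...) for a 'quickest' route."""
    root = ET.fromstring(xml_text)
    graph = {n.get("id"): {} for n in root.iter("node")}
    for way in root.iter("way"):
        refs = [nd.get("ref") for nd in way.iter("nd")]
        for a, b in zip(refs, refs[1:]):
            graph[a][b] = graph[b][a] = 1.0
    return graph

def shortest_path(graph, start, goal):
    """Step 4: Dijkstra over the weighted graph."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph[node].items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

graph = parse_osm(OSM_XML)
print(shortest_path(graph, "1", "3"))  # (2.0, ['1', '2', '3'])
```

With networkx installed, `parse_osm` would instead add the edges to an `nx.Graph` and `shortest_path` becomes a one-liner, but the structure is the same.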

How to extract data from an embed Raphael dataset to CSV?

I am attempting to extract the data from this Google Politics Insights webpage, from January 2012 to the present, for Mitt Romney and Barack Obama, for the following datasets:
Search Trends (based on volume)
Google News Mentions (mentions in articles and blog posts)
YouTube Video Views (views from candidate channels)
Using Firebug, I was able to figure out that the data is stored in a format readable by Raphael 2.1.0; I looked at the dataset, and nothing strikes me as a simple way to convert the data to CSV.
How do I convert the data, per chart and per presidential candidate, into a CSV with a table for "Search Trends", "Google News Mentions", and "YouTube Video Views", broken down by the smallest increment of time, with the measured values scaled to the range 0.0 to 1.0? (Note: the reason for "0.0 to 1.0" is that the graphs do not appear to give volume info, so the volume is relative to the height of the graph itself.)
Alternatively, if there's another source for all three datasets in CSV, that would work too.
The first thing to do is to find out where the data comes from, so I looked at the network traffic in my developer console and found it quickly: the data is stored as JSON here.
Now you've got plenty of data for each candidate. I don't know exactly how these numbers relate to each other, but they are definitely used to calculate the graph. I found that in main.js, on line 392, the data is transformed with this expression:
Math.log(dataPoints[i][j] * 100.0) / Math.log(logScaleBase);
My guess is: undo the logarithm with a bit of exponentiation and you should get back the original values.
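If that guess is right, inverting the expression recovers the raw value: since plotted = log(raw * 100) / log(base), it follows that raw = base ** plotted / 100. A small sketch that round-trips the transform and writes CSV rows; the sample values and the choice of logScaleBase = 10 are assumptions, so check main.js for the real base:

```python
import csv
import math
import sys

def to_plotted(raw, log_scale_base):
    """The main.js expression: Math.log(v * 100.0) / Math.log(logScaleBase)."""
    return math.log(raw * 100.0) / math.log(log_scale_base)

def to_raw(plotted, log_scale_base):
    """Invert the transform: raw = base ** plotted / 100."""
    return log_scale_base ** plotted / 100.0

# Hypothetical plotted values, pretending logScaleBase = 10.
plotted_points = [to_plotted(v, 10) for v in (0.25, 0.5, 1.0)]
writer = csv.writer(sys.stdout)
writer.writerow(["plotted", "raw"])
for p in plotted_points:
    writer.writerow([round(p, 4), round(to_raw(p, 10), 4)])
```

Once the raw values are recovered per time point, dividing each series by its maximum gives the 0.0-1.0 normalization the question asks for.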