I have exported the 50-channel EEG example from the SciChart examples application to a standalone solution.
When I start debugging, no lines are drawn, but the code runs and the generated data looks fine.
I have not made any modifications to the example code.
According to the SciChart Forums, the chart being blank is likely related to trial licensing, which needs to be set.
See Licensing SciChart WPF for how to get a trial license key.
See App.xaml.cs of any exported project to see where to set the license.
using System.Windows;
using SciChart.Examples.ExternalDependencies.Controls.ExceptionView;

namespace SciChartExport
{
    /// <summary>
    /// Interaction logic for App.xaml
    /// </summary>
    public partial class App : Application
    {
        public App()
        {
            InitializeComponent();

            // TODO: Put your SciChart license key here if needed
            // SciChartSurface.SetRuntimeLicenseKey("{YOUR SCICHART WPF v6 LICENSE KEY}");
        }
    }
}
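For reference, here is a minimal sketch of the same constructor with the key actually applied; the key string is a placeholder, and SciChartSurface lives in the SciChart.Charting.Visuals namespace:

using System.Windows;
using SciChart.Charting.Visuals;

namespace SciChartExport
{
    public partial class App : Application
    {
        public App()
        {
            // Set the license key before any SciChartSurface is created.
            SciChartSurface.SetRuntimeLicenseKey("<YOUR SCICHART WPF v6 TRIAL KEY>");
            InitializeComponent();
        }
    }
}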
Can somebody help me with the following question? We need to use OCR technology, but we don't need all of the text, only some fields from invoices and receipts. I can't find the best solution for this.
If you want to start with mobile app development, you can use a text recognition service, at the core of which is OCR technology, for example the Huawei ML Kit Text Recognition API or Google Firebase ML's text recognition API. You can extract the required text information (from invoices and receipts, if that's what you need) through code.
I will list the main procedures for ML Kit text recognition integration; you can also download the demo on GitHub.
1. Preparations
1). Configure the Maven Repository Address in the Project-Level build.gradle File
buildscript {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}
allprojects {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
2). Add Configurations to the Header of the App-Level build.gradle File
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
3). Configure SDK Dependencies in the App-Level build.gradle File
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-ocr:2.0.1.300'
    // Import the Latin character recognition model package.
    implementation 'com.huawei.hms:ml-computer-vision-ocr-latin-model:2.0.1.300'
    // Import the Japanese and Korean character recognition model package.
    implementation 'com.huawei.hms:ml-computer-vision-ocr-jk-model:2.0.1.300'
    // Import the Chinese and English character recognition model package.
    implementation 'com.huawei.hms:ml-computer-vision-ocr-cn-model:2.0.1.300'
}
4). Add these Statements to the AndroidManifest.xml File so the Machine Learning Model can Automatically Update
<manifest>
...
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value="ocr" />
...
</manifest>
5). Apply for the Camera Permission
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
2. Code Development
1). Create an Analyzer
// "type" is a language code string such as "zh" or "en".
MLTextAnalyzer analyzer = new MLTextAnalyzer.Factory(context).setLanguage(type).create();
2). Set the Recognition Result Processor to Bind with the Analyzer
analyzer.setTransactor(new OcrDetectorProcessor());
3). Call the Synchronous API
Use the built-in LensEngine of the SDK to create an object, register the analyzer, and initialize camera parameters.
lensEngine = new LensEngine.Creator(context, analyzer)
.setLensType(LensEngine.BACK_LENS)
.applyDisplayDimension(width, height)
.applyFps(30.0f)
.enableAutomaticFocus(true)
.create();
4). Call the run Method to Start the Camera and Read the Camera Streams for the Recognition
try {
    lensEngine.run(holder);
} catch (IOException e) {
    // Exception handling logic.
    Log.e("TAG", "e=" + e.getMessage());
}
5). Process the Recognition Result As Required
public class OcrDetectorProcessor implements MLAnalyzer.MLTransactor<MLText.Block> {
    @Override
    public void transactResult(MLAnalyzer.Result<MLText.Block> results) {
        SparseArray<MLText.Block> items = results.getAnalyseList();
        // Process the recognition result as required. Only the detection results are processed.
        // Other detection-related APIs provided by ML Kit cannot be called.
        …
    }

    @Override
    public void destroy() {
        // Callback method used to release resources when the detection ends.
    }
}
6). Stop the Analyzer and Release the Detection Resources When the Detection Ends
if (analyzer != null) {
try {
analyzer.stop();
} catch (IOException e) {
// Exception handling.
}
}
if (lensEngine != null) {
lensEngine.release();
}
You can also refer to this Medium article.
For more, see the official documentation.
There are a couple of solutions that I am aware of from the LEADTOOLS SDK. Both the Master Forms Editor and the Document Analyzer sound like they could be useful for your use case. In the interest of full disclosure, it is the company that I work for, but I would be remiss not to mention it as a possible solution for your scenario.
This is a link to a YouTube video describing how the Master Forms Editor technology works.
https://www.youtube.com/watch?v=wo6TGcdrtb4
It essentially allows you to define pre-set zones on various forms, and then compare the document to the forms in the repository. Then it will extract the data from those zones and you can manage the OCR'd data however you would like.
Here is a link to the online documentation that shows a coded implementation of some of the functionality:
https://www.leadtools.com/help/sdk/v21/dh/to/steps-to-generate-a-master-form-and-save-it-to-a-master-repository.html
Here is a link to the other functionality that I mentioned - Document Analyzer - that one of my colleagues goes over in the Microsoft Build Post Show.
https://www.leadtools.com/blog/general/microsoft-build-post-show-v21-document-analyzer-demo/
The code for these examples, and the technology used, is included in a 60-day free trial that can be downloaded from the LEAD site if you are interested in checking it out.
https://www.leadtools.com/downloads
I have a simple set up:
Azure Web App, running a static react app
Azure Functions App, the API layer that accesses the database and that is called from the static web app
Both the Web App and the Functions App have a deployment-slot feature: you deploy to a separate slot first and, if everything works well, you can swap the artifact in your slot with the current version, with no downtime. I really want to use this to its fullest.
I'd like to use the Web App configuration to inject the root URI of the API and have it point to the API in the corresponding slot. So the staging static site should point to the staging API, and production to production.
But here's the main problem: I cannot access the Web App configuration from my React app. I have to insert the root URI at build time, which defeats the swap feature for the Web App (since after a swap it would still point to staging).
Accessing the configuration works fine for the Functions App; I assume because it's running Node.
The Web App configuration values are available as environment variables on the server. You won't be able to access those variables from within your static React app, which runs on the client.
You will need some kind of middleware that reads the environment variables and exposes them through an API.
You can use ASP.NET Core with the React project template to create both an ASP.NET Core project that acts as an API and a standard CRA React project that acts as a UI, with the convenience of hosting both in a single app project that can be built and published as a single unit. (Source).
Then you have to write a little controller that exposes the configuration. Here is an example:
public class MyOptions
{
    public string ApiUri { get; set; }
}

[ApiController]
[Route("api/[controller]")] // served at /api/configuration, matching the UI call below
public class ConfigurationController : ControllerBase
{
    private readonly MyOptions _options;

    public ConfigurationController(IOptions<MyOptions> options)
    {
        _options = options.Value;
    }

    [HttpGet]
    public MyOptions GetConfigurations()
    {
        return _options;
    }
}
You also need to configure the options within Startup.cs:
public void ConfigureServices(IServiceCollection services)
{
    services.Configure<MyOptions>(Configuration.GetSection(nameof(MyOptions)));
    services.AddControllers();
}
Now you can set your initial value within appsettings.json:
{
  "MyOptions": {
    "ApiUri": "https://myapp.domain.com/api"
  }
}
You are also able to override these options using the Azure Web App configuration, because the default host reads environment variables as a configuration source that overrides appsettings.json. In the Azure portal, the corresponding app setting would be named MyOptions__ApiUri (the double underscore maps to the section separator).
Now the last thing you have to do is to retrieve the settings within your static UI using:
window.location.host + "/api/configuration"
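For example, a minimal sketch of fetching it at startup; note that ASP.NET Core camel-cases JSON property names by default, so ApiUri arrives as apiUri:

// Fetch the slot-specific configuration from the server before calling the API.
fetch(window.location.origin + "/api/configuration")
    .then(function (response) { return response.json(); })
    .then(function (config) {
        // config.apiUri now holds the API root for the current slot.
        console.log("API root:", config.apiUri);
    });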
Client code cannot access appsettings.json. In React you can use .env files to store your configuration. You can create a .env file for each environment you want to support, and in the build script you can specify which .env file to use for each environment.
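For example, with Create React App you could keep one file per environment; CRA only exposes variables prefixed with REACT_APP_, and they are inlined at build time (file and variable names here are illustrative):

# contents of .env.staging
REACT_APP_API_URI=https://myapp-staging.domain.com/api

// reading it from the React code (resolved at build time):
const apiUri = process.env.REACT_APP_API_URI;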
The straightforward question is: is Microsoft.Extensions.Options.IOptions meant to be used only within the context of the umbrella app (a web app in this case), or in class libraries as well?
Example:
In an n-layered ASP.NET Core app, we have a services layer that depends on some settings coming from the appsettings.json file.
We first started with something along these lines in Startup.cs:
services.Configure<Services.Options.XOptions>(options =>
{
options.OptionProperty1 = Configuration["OptionXSection:OptionXProperty"];
});
And then in the service constructor:
ServiceConstructor(IOptions<XOptions> xOptions){}
But that assumes our service layer has a dependency on Microsoft.Extensions.Options.
We're not sure if this is the recommended way, or whether there is some better practice.
It just feels a bit awkward that our services class library should be aware of the DI container implementation.
You can register POCO settings for injection too, but you lose some functionality related to reloading when appsettings.json is edited.
services.AddTransient<XOptions>(
provider => provider.GetRequiredService<IOptionsSnapshot<XOptions>>().Value);
Now when you inject XOptions in a constructor, you will get the class. But when you edit appsettings.json, the value won't be updated until the next time it is resolved, which for scoped services is the next request, and for singleton services never.
On the other side, injecting IOptionsSnapshot<T> and reading .Value will always get you the current settings, even when appsettings.json is reloaded (assuming you registered it with .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)).
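For illustration, a minimal sketch of the snapshot-based style (the service name is hypothetical):

public class XService
{
    private readonly IOptionsSnapshot<XOptions> _options;

    public XService(IOptionsSnapshot<XOptions> options)
    {
        _options = options;
    }

    // .Value is evaluated per resolution, so edits to appsettings.json are picked up.
    public string CurrentProperty => _options.Value.OptionProperty1;
}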
The obvious way to keep this functionality without pulling the Microsoft.Extensions.Options package into your service/domain layer is to create your own interface and implementation.
// in your shared service/domain assembly
public interface ISettingsSnapshot<T> where T : class
{
T Value { get; }
}
and implement it on the application side (outside of your services/domain assemblies), i.e. in MyProject.Web (where ASP.NET Core and the composition root live):
public class OptionsSnapshotWrapper<T> : ISettingsSnapshot<T>
{
private readonly IOptionsSnapshot<T> snapshot;
public OptionsSnapshotWrapper(IOptionsSnapshot<T> snapshot)
{
this.snapshot = snapshot ?? throw new ArgumentNullException(nameof(snapshot));
}
public T Value => snapshot.Value;
}
and register it as:
// IOptionsSnapshot<T> is itself scoped, so register the wrapper as scoped rather than singleton:
services.AddScoped(typeof(ISettingsSnapshot<>), typeof(OptionsSnapshotWrapper<>));
Now you have removed the dependency on IOptions<T> and IOptionsSnapshot<T> from your services but retained all of its advantages, such as options being updated when appsettings.json is edited. When you change DI containers, just replace OptionsSnapshotWrapper<T> with your new implementation.
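A consuming service in the class library then depends only on your own abstraction (again, the service name is hypothetical):

// Lives in the services assembly; no reference to Microsoft.Extensions.Options needed.
public class XService
{
    private readonly ISettingsSnapshot<XOptions> _settings;

    public XService(ISettingsSnapshot<XOptions> settings)
    {
        _settings = settings;
    }

    public string CurrentProperty => _settings.Value.OptionProperty1;
}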
I'm trying to set up a project with MvvmCross in a Windows Phone 8.1 Universal App. I used this tutorial: https://github.com/MvvmCross/MvvmCross/wiki/Tip-Calc-A-Universal-Windows-App-UI-Project
Now I always get the following error:
Program does not contain a static 'Main' method suitable for an entry point [Project].WindowsPhone
In the app project, the entry point is defined in App.cs. In this class I only changed this:
var setup = new Setup(rootFrame);
setup.Initialize();
var start = Mvx.Resolve<IMvxAppStart>();
start.Start();
And added this Setup class:
public class Setup : MvxWindowsSetup
{
public Setup(Frame rootFrame) : base(rootFrame)
{
}
protected override IMvxApplication CreateApp()
{
return new Core.App();
}
}
Does anyone have an idea what's the reason for that? o.O
Thanks
NPadrutt
EDIT: I was able to solve it by creating a new project and adding the Hot Tuna starter package. From there I added the Android and iOS files from the other project one by one.
The solution is to set the "Build Action" of your App.xaml file to "ApplicationDefinition".
If you did what I did, you at some point added an App.xaml file from scratch, and this sets the build action incorrectly.
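In the .csproj, the corrected setting corresponds to an item like this (a sketch of a typical entry; a from-scratch App.xaml usually ends up as a <Page> item instead):

<ApplicationDefinition Include="App.xaml">
  <Generator>MSBuild:Compile</Generator>
  <SubType>Designer</SubType>
</ApplicationDefinition>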
I'm trying to add a DLL to my MvvmCross.core library project. However, the included namespaces cannot be resolved for some reason when I try to reference them from one of the ViewModels. In the Object Browser I can see the included namespaces.
When I reference the same DLL from the MvvmCross.Droid project, I do not see the problem.
Unfortunately, I do not have the source code, so I need to reference it as a DLL.
I have tried this in both VS2013 and Xamarin Studio.
Is your MvvmCross.core project a portable class library? If it is, you won't be able to reference a platform-specific DLL from it.
What you can do is create another platform-specific project, MyThing.Droid, and reference the DLL there. In the MvvmCross.core project, create an interface, IMyThingService. In MyThing.Droid, create MyThingService, which implements IMyThingService and does the stuff you want. Now you can get a reference to IMyThingService and call DoStuff() from the MvvmCross.core project, as shown below.
You can also use the plugin model provided by MvvmCross to accomplish this.
// In MyThing.Droid: the implementation, which wraps calls into the referenced DLL.
public class MyThingService : IMyThingService
{
    public void DoStuff()
    {
        // Call into the platform-specific DLL here.
    }
}

// In MvvmCross.core: the abstraction the ViewModels depend on.
public interface IMyThingService
{
    void DoStuff();
}
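To wire this up, register the implementation in the platform project's Setup class, for example (a minimal sketch assuming MvvmCross's Mvx static IoC helper; names as above):

// In MyThing.Droid's Setup.cs:
protected override void InitializeFirstChance()
{
    base.InitializeFirstChance();
    Mvx.RegisterSingleton<IMyThingService>(new MyThingService());
}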