Power Automate: Create SharePoint Folder - Response not in JSON Format

I am working on a Power Automate flow and beating my head against a wall. It seems so easy, but it keeps failing with the error "The response is not in a JSON format".
The intention of the flow is to set up standard folders in some 200 SharePoint sites within my company. In two document libraries on each of these sites, I want to add a year folder (e.g. 2022) and a month folder (10-Oct). Seems straightforward.
I have a SharePoint list that contains all the SharePoint sites. After being triggered manually, the flow reads that SharePoint list (Get Items), then works down the list of SharePoint sites (Apply to Each) and creates the new folders. I have been researching and tinkering with this for a couple of hours.
The "Directory" is pulled from the SharePoint list as well; for this example assume it is "Share Documents1", though it does vary slightly around the company.
The naming is all correct....
Here is the Error output. I am at a loss...

I see there are some strange characters in your site address value: :f:/r/. Can you remove that part from the value?

This error was thrown due to extra characters in the SharePoint site address. When using the "Copy Link" feature within SharePoint, it adds extra characters that are not required; in this case they were :f:/r/.
Once removed, the JSON error was resolved and the flow worked perfectly. Lesson learned: inspect the address for additional strings that are not truly required, remove them, test the link to make sure it is still valid, and then use the streamlined link.
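For illustration only, here is a tiny JavaScript sketch of the string change involved; the URL below is hypothetical, and in the flow itself the cleanup was simply done by hand on the Site Address value:
// Hypothetical "Copy Link" URL; the :f:/r/ segment is the sharing part
// that the SharePoint connector does not expect in a site address.
const copiedLink = 'https://contoso.sharepoint.com/:f:/r/sites/Finance/Shared%20Documents1';
const cleaned = copiedLink.replace(':f:/r/', '');
console.log(cleaned); // https://contoso.sharepoint.com/sites/Finance/Shared%20Documents1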

Related

Convert a JSON Response to pdf - NodeJS

When a get request is sent to: 'http://localhost:4000/features'
There is a response with JSON Data which has HTML inside it.
I need the contents of the field name and description to be saved as PDF
Sample:
[{"_id":"5ad4951d0ba1c37c65818bc7","name":"Find your work faster","description":"<p>With an improved <strong>quick search</strong>, searching through all your issues and projects will be nothing else but a breeze. Whether you know the full issue key, part of the issue name, or just have a distant memory of a project from a year ago, start typing the words, and we’ll do the rest for you. The quick search instantly shows the most relevant results, and refreshes them whenever you change your search term.</p>\n\n<p><img alt=\"\" src=\"https://confluence.atlassian.com/jirasoftware/files/945521251/945528523/1/1518181922686/quicksearch.png\" style=\"height:400px; width:800px\" /></p>\n\n<p>If you’ve already found what you were looking for, just treat quick search as a handy work diary. Click anywhere in the box to see the issues and projects you’ve been working on recently, and have the most important work always at your fingertips.</p>\n\n<p>Learn more</p>\n","__v":0},{"_id":"5ad5ddddcd054b2b5b20143c","name":"Project sidebar","description":"<p>The project sidebar that we previewed in JIRA 6.4 is here to stay. We built this new navigation experience to make it easier for you to find what you need in your projects. It's even better, if you are using JIRA Agile: your backlog, sprints, and reports are now just a click away. If you've used the sidebar with JIRA Agile before, you'll notice that cross-project boards, which include multiple projects, now have a project sidebar as well — albeit a simpler version.</p>\n","__v":0}]
Can this be done in nodeJS?
Conversion isn't quite the right word here; generation is. Based on the general shape of the JSON response, you can write logic on your Node.js server to generate a PDF from it.
PDFKit and PDFmake are two good libraries for this purpose.
I've used pdfmake and it is very good.
See doc here: https://pdfmake.github.io/docs/
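If you go the pdfmake route on the server, a minimal sketch looks roughly like this (the font paths and the document definition below are assumptions for illustration; the HTML in the description field would still need to be stripped or converted to pdfmake content first):
const fs = require('fs');
const PdfPrinter = require('pdfmake');

// pdfmake on the server needs font files; these paths assume the Roboto
// fonts that ship with the pdfmake examples are available locally.
const fonts = {
  Roboto: {
    normal: 'fonts/Roboto-Regular.ttf',
    bold: 'fonts/Roboto-Medium.ttf',
    italics: 'fonts/Roboto-Italic.ttf',
    bolditalics: 'fonts/Roboto-MediumItalic.ttf',
  },
};

const printer = new PdfPrinter(fonts);

// Build a document definition from one item of the JSON response.
const docDefinition = {
  content: [
    { text: 'Find your work faster', style: 'header' },
    'Plain-text version of the description goes here.',
  ],
  styles: { header: { fontSize: 16, bold: true } },
};

const pdfDoc = printer.createPdfKitDocument(docDefinition);
pdfDoc.pipe(fs.createWriteStream('feature.pdf'));
pdfDoc.end();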
Use html-pdf to generate a PDF from HTML; it works on top of PhantomJS.
var pdf = require('html-pdf');
// 'file' is the parsed JSON array returned by /features. Render the first
// item's HTML description and name the output after the item's name field
// (the description itself is HTML and would not make a valid file name).
pdf.create(file[0].description).toFile('./' + file[0].name + '.pdf', function (err, res) {
    if (err) return console.error(err);
    console.log(res.filename);
});
Note: the sample code snippet above handles only the first object in your array.
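For completeness, here is a hedged end-to-end sketch that fetches the JSON and writes one PDF per item (it assumes Node 18+ for the global fetch and the html-pdf package, and names each file after the item's name field):
const pdf = require('html-pdf');

async function generatePdfs() {
  const res = await fetch('http://localhost:4000/features');
  const features = await res.json();

  for (const feature of features) {
    // Combine the name and the HTML description into one small HTML document.
    const html = '<h1>' + feature.name + '</h1>' + feature.description;
    await new Promise((resolve, reject) => {
      pdf.create(html).toFile('./' + feature.name + '.pdf', (err, result) => {
        if (err) return reject(err);
        console.log('Created', result.filename);
        resolve(result);
      });
    });
  }
}

generatePdfs().catch(console.error);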

Inconsistency in MS Graph API behaviour for onenote

When a section is renamed, the get sections API doesn't reflect the updated name, whereas the get pages API shows the updated parent section name. This seems to be a bug/data inconsistency in the OneNote API.
Changing anything at the page level updates the lastModifiedDateTime of the section, but nothing changes at the notebook level. This again seems like a data inconsistency issue.
Can somebody clear up this confusion?
(Note: all of the above can be tested using the MS Graph API Explorer.)
These are two separate topics:
Section renaming
This is a known limitation/bug in OneNote - if you rename a section in OneNote Online (in your browser), then the API GET ~/notebooks/id/sections or GET ~/sections will give you the "old" name. This is because OneNote Online doesn't actually rename the file; it only marks the file as "to be renamed" - if you were to look at the file itself in OneDrive/SharePoint, it would still have the old name.
Once a native OneNote client (for example OneNote for Windows) sees the section that has been marked as "to be renamed", it actually renames the file.
The OneNote API GET ~/sections/id/pages actually looks at the section binaries and is able to tell whether the section is renamed or not, which is why that name can be trusted as the "most up to date" one.
I have communicated this feedback to our team and we are exploring alternatives - I encourage you to open an item on UserVoice so we can better understand the impact.
https://onenote.uservoice.com/forums/245490-onenote-developer-apis
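To illustrate the difference described above, here is a rough sketch of the two calls via Microsoft Graph (the access token and section id are placeholders, not values from the question):
async function compareSectionNames(accessToken, sectionId) {
  const headers = { Authorization: 'Bearer ' + accessToken };

  // The collection endpoint may still return the pre-rename name until a
  // native OneNote client has synced the rename.
  const sections = await fetch(
    'https://graph.microsoft.com/v1.0/me/onenote/sections',
    { headers }
  ).then(r => r.json());
  console.log(sections.value.map(s => s.displayName));

  // The pages endpoint reads the section binaries, so the expanded parent
  // section name reflects the rename.
  const pages = await fetch(
    'https://graph.microsoft.com/v1.0/me/onenote/sections/' + sectionId +
      '/pages?$expand=parentSection',
    { headers }
  ).then(r => r.json());
  console.log(pages.value.map(p => p.parentSection.displayName));
}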
LastModifiedTime (LMT) on notebook/section clarifications:
The LMT of a section is equal to max(LMT of pages under it).
The LMT of a section group however is not max (LMT of sections and section groups under it). A section group is a folder and its LMT should behave like that of a folder in a traditional file system (reflects time of last add/delete of a file/folder directly under it).
However, there is nothing stopping you from using $expand and calculating the LMT (as you understand it) yourself based on the entities below the notebook/section group.
https://blogs.msdn.microsoft.com/onenotedev/2014/12/16/beta-get-onenote-entities-in-one-roundtrip-using-expand/
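For example, here is a rough sketch of deriving that value yourself for a notebook via Microsoft Graph and $expand (the access token and notebook id are placeholders):
async function deriveNotebookLmt(accessToken, notebookId) {
  // One round trip: the notebook with its sections and section groups
  // (and the sections inside each group) expanded.
  const url = 'https://graph.microsoft.com/v1.0/me/onenote/notebooks/' +
    notebookId + '?$expand=sections,sectionGroups($expand=sections)';
  const notebook = await fetch(url, {
    headers: { Authorization: 'Bearer ' + accessToken },
  }).then(r => r.json());

  // Gather all sections, then take the latest lastModifiedDateTime.
  const sections = (notebook.sections || []).concat(
    ...(notebook.sectionGroups || []).map(g => g.sections || [])
  );
  const latest = Math.max(...sections.map(s => Date.parse(s.lastModifiedDateTime)));
  console.log('Derived notebook LMT:', new Date(latest).toISOString());
}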

No Google BigQuery table created after importing data through webclient

I'm currently familiarizing myself with Google BigQuery by working through the examples at https://cloud.google.com/bigquery/web-ui-quickstart. Running a query over the public datasets works fine.
I run into problems when uploading custom data into a new table through the web UI. I create a new dataset and table, and upload the CSV file provided with the example. As in the example, I enter the schema and submit the file. The upload window then stays on top and turns grey as if it's working, but nothing seems to happen afterwards. When I dismiss the upload window after a long wait, the table appears to have been created in the tree on the left. However, when I click on the table, an error is shown:
"Unable to find table: ndwtest-984:csvtest.csvdata"
This seems like a trivial action, but I cannot seem to get it to work. Over the last two days I've tried various files, uploaded the file to Google Cloud Storage first, and played around with the advanced options, but I keep getting the same error.
Help would be much appreciated.
Some steps to help you:
billing must be enabled
you need to upload one single TXT file from the example, e.g. yob2013.txt, and not the zip file
make sure the schema is entered as text: name:string,gender:string,count:integer
on the last screen of the wizard you don't need to change the default CSV option parameters (for demo purposes it works as it is)
I just tried the example, and it works for me. In case you still have errors, you can check your Job History menu in the Web UI; the direct link is below, but note that you need to put your own ID in the link.
https://bigquery.cloud.google.com/jobs/YOUR_ANONYMOUS_PROJECT_ID_HERE?pli=1
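If the web UI upload keeps stalling, the same load can also be attempted outside the UI; below is a rough Node.js sketch using the @google-cloud/bigquery client, with the dataset, table, file, and schema taken from the steps above (billing still has to be enabled on the project):
const { BigQuery } = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function loadNames() {
  // Load the local TXT/CSV file into csvtest.csvdata with an explicit schema.
  const [job] = await bigquery
    .dataset('csvtest')
    .table('csvdata')
    .load('yob2013.txt', {
      sourceFormat: 'CSV',
      schema: {
        fields: [
          { name: 'name', type: 'STRING' },
          { name: 'gender', type: 'STRING' },
          { name: 'count', type: 'INTEGER' },
        ],
      },
    });
  console.log('Load job ' + job.id + ' finished');
}

loadNames().catch(console.error);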

SSRS Data driven subscription with Html format

I need a data-driven subscription on a report in SSRS 2008 that puts HTML or MHTML into the email message. The reason is to be able to view reports on mobile devices without any additional application except email, which is why I can't use the PDF or Excel format.
The report contains images (arrows) that show trends, and I have an issue with them.
When I convert the report into an MHTML file by clicking the "Export" button, everything is fine, because SSRS saves the HTML file and images into one folder.
But when SSRS creates and sends the email in MHTML format, I get this:
It puts the HTML file and images into separate attachments, so the report doesn't display properly.
I tried to create the subscription with the "HTML 4.0" format, but the result is almost the same, with one distinction: SSRS does not put the images into the email but instead creates links to the server where it saves those images. It saves the images to the SSRS server, and if a person wants to see an image, the server asks for credentials. We can't give credentials to all the people who get the emails.
I tried to create the report with linked images. I saved the arrows on a server that does not require credentials and made some changes to RSReportServer.config, pointing the image folder link to "My new server name//folder". I hoped to succeed, and I almost did.
But when I got the email in "HTML 4.0" format, there were still no images. The image links point to the proper server folder ("My new server name//folder"), but the images have different names, something like "fbb5b4b7966442dbab886051839d93c0" instead of "arrow_up.jpg". I think SSRS generates a code for each image and builds the link using that code rather than the real name.
Do you have any ideas how to fix this issue, or how to create a data-driven subscription that generates an MHTML or HTML report with a proper view? Other topics don't answer this question.
Thank you
Cheat!
In your specific scenario (at least as far as the examples in your question go), you may be able to sidestep this issue entirely by using Unicode characters and appropriate text colors. For example, the Arrows block contains:
↑ ↗ ↓ ↘
And the Geometric shapes block contains:
▲ ▴ ▼ ▾ ▬
Use expressions to make sure the "up" items are green, "down" is red, and "equal" is yellow, and you're good to go.

Drupal 7 - Adding Nodes Through phpmyadmin is not Working Correctly

I have received a Microsoft Access database file and was tasked with converting the contents into something readable by MySQL for a Drupal 7 website database. I managed to upload the records into the "node" table successfully, with the correct content type classification, unique primary keys and node IDs, etc. Or so I thought.
When I checked the Drupal site, I looked at the list of content type X, and all of the new content was there. However, when I tried to click an item, instead of opening the new page as I expected, I received a "page not found" message. I tried looking for the new content manually via "Find Content", but none of it was showing up. I checked entity reference lists that referenced content type X, but the items were not showing up on those lists, either.
I checked which fields were required for content type X, and I found that "location category" and "address" were required fields. So to test, I only added 1 entry to each of those tables (both field_data and field_revision versions of the required field), representing the 1st of the many I tried to transfer over. Still nothing. I have no idea what I could be doing wrong. Can anyone offer some insight?
Adding content to Drupal through the database is absolutely the wrong way to go about creating content. I suggest you try any of the following methods:
Create the nodes programmatically using Drupal's API functions:
http://fooninja.net/2011/04/13/guide-to-programmatic-node-creation-in-drupal-7/
Upload data through a CSV file using the Feeds module:
http://drupal.org/project/feeds/