Dear dark-network analysts, covert-network lovers, sociologists with an interest in secret societies, and practitioners of network analysis in the police and in journalism,
This week I am modeling nodal attacks on a well-known Italian mafia network in the United States, which I received as a gift from a research team that recently published in Social Networks. In a typical node-attack paper, network scientists sequentially remove some element from the network and monitor a number of graph connectivity measures (e.g., Albert, Jeong & Barabási 2000). The network literature describes the following attack strategies on covert networks (a minimal sketch of the procedure follows the list below):
targeted attacks, i.e., attacks on the most central actors (e.g., Xu and Chen 2008)
random attacks (e.g., Xu and Chen 2008)
bridge and broker attacks (e.g., Xu and Chen 2008)
hub removal
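For concreteness, here is a minimal sketch of the sequential-attack procedure. It uses Python with networkx purely for illustration (my actual workflow is in R); the graph, the attack budget, and the centrality choices are placeholders rather than the original data.
import random
import networkx as nx

def attack(G, strategy="targeted", n_remove=30, seed=42):
    # Remove n_remove nodes one at a time and track the giant-component size.
    G = G.copy()
    rng = random.Random(seed)
    sizes = [len(max(nx.connected_components(G), key=len))]
    for _ in range(n_remove):
        if G.number_of_nodes() == 0:
            break
        if strategy == "targeted":        # attack the most central actor (degree)
            victim = max(G.degree, key=lambda kv: kv[1])[0]
        elif strategy == "bridge":        # bridge/broker attack (betweenness)
            victim = max(nx.betweenness_centrality(G).items(), key=lambda kv: kv[1])[0]
        else:                             # random attack
            victim = rng.choice(list(G.nodes))
        G.remove_node(victim)
        comps = list(nx.connected_components(G))
        sizes.append(len(max(comps, key=len)) if comps else 0)
    return sizes

# Toy usage, standing in for the mafia network:
# sizes = attack(nx.erdos_renyi_graph(100, 0.05), strategy="targeted")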
What other original structural attacks on criminal networks can you think of? I am particularly interested in implementing node-removal strategies that are actually used by the police.
What innovations in nodal attacks could I immediately propose? We could attempt to remove entire micro-structures from the dark network in some order and see what happens to its connectivity. Federico Varese and Diego Gambetta wrote extensively on the importance of the triad in criminal organizations (e.g., Gambetta 2002; Varese 2000) and on the Triads as a business (e.g., Chu 2002). Mark Lauchs wrote two papers on blowing the whistle in chains of corrupt transactions and on how these chains fail sequentially (e.g., Lauchs et al. 2011, 2012). Mark Granovetter briefly sketched a few sociological remarks on market corruption and network corruption (Granovetter 2004). I suppose his network argument could be applied to the removal of entire k-cores, n-clubs, or n-cliques of corruption monopolies or oligopolies in the network market of organized crime. Candidate micro-structures include (a sketch of such removals follows the list below):
triads
chains and certain paths
cliques
small ego networks
partial branches and hierarchies
k-cores
n-clubs
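A rough sketch of what such structural removal could look like, again in Python with networkx for illustration only; the choice of k and of which clique to target are arbitrary assumptions, not prescriptions.
import networkx as nx

def remove_k_core(G, k=3):
    # Delete every node that belongs to the k-core and return the remainder.
    H = G.copy()
    core_nodes = list(nx.k_core(H, k=k).nodes)
    H.remove_nodes_from(core_nodes)
    return H

def remove_largest_clique(G):
    # Delete all members of one largest maximal clique (expensive on big graphs).
    H = G.copy()
    clique = max(nx.find_cliques(H), key=len)
    H.remove_nodes_from(clique)
    return H

def giant_component_size(G):
    comps = list(nx.connected_components(G))
    return len(max(comps, key=len)) if comps else 0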
Furthermore, I have also been interested in modeling murder in covert networks by applying dreadful, historically documented techniques of personnel removal used in mafias, gangs, terrorist networks, crime rings, and military organizations, such as:
astrological murder
alphabetical murder
black lists
kinship bloodshed, e.g., taking a family's first sons as a tax
extinction of entire bloodlines, e.g., the witches in The Vampire Diaries with Nina Dobrev
eradication of entire crime families
Clearly these aren't structural attacks, yet some of their motivations may hide network motives. Is there a movie called The Network Murder? Regrettably, I only know Andrew Papachristos' "Murder by Structure" (2009), a paper that was originally pointed out to me by the American sociologist Peter Bearman. Another original source for modeling network attacks could be the chapter "Death" in Federico Varese's Mafia Life (2018). The book features a number of original feuds and inter-clan wars in the mafia which are not necessarily network-based, yet are economically very shrewd.
For the lazy math geeks out there who don't have time to read many books or review the voluminous Italian literature on the topic, I recommend watching Italian mafia movies on weekends.
References
Albert, R., Jeong, H., & Barabási, A. L. (2000). Error and attack tolerance of complex networks. Nature, 406(6794), 378-382.
Chu, Y. K. (2002). The triads as business. Routledge.
Gambetta, D. (2002). “Corruption: An analytical map.”
Granovetter, M. (2004). The social construction of corruption. Department of Sociology, Stanford University.
Lauchs, M., Keast, R., & Chamberlain, D. (2012). Resilience of a corrupt police network: The first and second jokes in Queensland. Crime, Law and Social Change, 57, 195-207.
Lauchs, M., Keast, R., & Yousefpour, N. (2011). Corrupt police networks: Uncovering hidden relationship patterns, functions and roles. Policing & Society, 21(1), 110-127.
Papachristos, A. V. (2009). Murder by structure: Dominance relations and the social structure of gang homicide. American Journal of Sociology, 115(1), 74-128.
Varese, F. (2000). Pervasive corruption. In Economic Crime in Russia, 99-111.
Varese, F. (2018). Mafia life: Love, death, and money at the heart of organized crime. Oxford University Press.
Xu, J., & Chen, H. (2008). The topology of dark networks. Communications of the ACM, 51(10), 58-65.
(References compiled via Google Scholar; please improve the referencing wherever possible.)
Currently, I am experimenting with removing entire crime families, bloodlines, and smaller network structures. This is also a brilliant way to improve the brainwaver package in R, which at present is somewhat limited (for my needs, at least).
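A small sketch of what I mean by removing an entire crime family at once, assuming a hypothetical node attribute named "family" (the attribute name and labels are placeholders, not taken from the actual dataset):
import networkx as nx

def remove_family(G, family_label):
    # Drop every member of the named family in a single step.
    H = G.copy()
    members = [n for n, d in H.nodes(data=True) if d.get("family") == family_label]
    H.remove_nodes_from(members)
    return H

# e.g., H = remove_family(G, "FamilyA")   # the label is illustrative only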
Related
I am trying to combine the targeted- and random-attack models of Albert, Jeong, and Barabási (2000) with gossip diffusion on a given organized crime network.
In my theoretical framework, gangsters gossip about the deaths of fellow workmen elsewhere in their network neighborhood. Hence, the gangsters learn the condolence news as their comrades die off sequentially each period, either through social ties (social learning) or through exogenous sources (asocial learning). Unfortunately, some gangsters may not survive another day to hear the next piece of death news.
For this task, I am using Dai Shizuka's code for diffusion through social learning (via social ties) and asocial learning (via sources exogenous to the network), as illustrated in the article on social learning in free-living animals by Franz and Nunn (2009). I assume gangsters learn the condolence gossip either through their personal networks or through an external source.
In the first scenario, I must kill 13 historically well-known gangsters from a list published online. In the second scenario, I remove the 30 most central gangsters, and in the last scenario I randomly delete 30 gangsters from the mafia network. At the end, I compare the S-curves for social and asocial learning across the three scenarios. The innovation is the inclusion of attrition in diffusion models in an original setting: the Italian mafia network in the U.S.
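To make the intended process concrete, here is a minimal sketch of gossip diffusion with attrition, written in Python with networkx as a stand-in for the R workflow; it is not Dai Shizuka's code, and the transmission rate, asocial rate, and kill list are made-up parameters.
import random
import networkx as nx

def diffusion_with_attrition(G, kill_list, seed_node, tau=0.2, asocial=0.02, seed=1):
    G = G.copy()
    rng = random.Random(seed)
    informed = {seed_node}
    curve = [len(informed)]
    for victim in kill_list:
        # Social learning: each uninformed gangster may hear the news from
        # informed neighbours; failing that, a small asocial chance applies.
        newly = set()
        for n in G.nodes:
            if n in informed:
                continue
            k = sum(1 for nb in G.neighbors(n) if nb in informed)
            p_social = 1 - (1 - tau) ** k
            if rng.random() < p_social or rng.random() < asocial:
                newly.add(n)
        informed |= newly
        # Attrition: this period's victim dies and leaves the network.
        if victim in G:
            G.remove_node(victim)
            informed.discard(victim)
        curve.append(len(informed))
    return curve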
My question is how to create the appropriate temporal network object that works with Dai Shizuka's code. The code isn't designed for temporal (longitudinal) networks.
References
Albert, R., Jeong, H., & Barabási, A. L. (2000). Error and attack tolerance of complex networks. Nature, 406(6794), 378-382.
Shizuka, D. (2023). 8. Intro to diffusion on networks. Available at: https://dshizuka.github.io/networkanalysis/08_diffusion.html. Accessed: 8 February 2023.
Franz, M., & Nunn, C. L. (2009). Network-based diffusion analysis: A new method for detecting social learning. Proceedings of the Royal Society B: Biological Sciences, 276(1663), 1829-1836.
I've also tried modeling the process with the alternative networkDynamic and netdiffuseR packages, yet I was rather dissatisfied with the final presentation and with the way the diffusion plots are rendered in netdiffuseR. I also ran into some issues when running the toy diffusion example in networkDynamic (p. 45).
Because there are so many diffusion models for various social phenomena in the social networks literature, we really need to know the specificities of the various diffusion models and the network mechanisms behind them. For instance, is the process of gossip diffusion like the process of STD diffusion? If not, why not?
I am trying to parse a raw JSON string using json.loads, but it throws the following error:
JSONDecodeError at /octopus/entries/53/test_sample_job/
Expecting ',' delimiter: line 3 column 27 (char 48)
My JSON string data is as follows, and I get the error when I call json.loads(data):
data = {
"name": "Shea",
"content": "<p style="text-align:left">Job Description</p><p style="text-align:inherit"></p>Have you heard about phenom? phenom is an innovative, global healthcare leader committed to improving health and well-being in 140 countries around the world. We continue to focus our research on conditions that affect millions of people around the world - diseases like Alzheimer's, Diabetes and Cancer - while further expanding our strengths in areas such as vaccines and biologics. We aspire to be the best healthcare company in the world and are dedicated to providing leading innovations and solutions for tomorrow. phenom’s Global Human Health (GHH) Division abides by a “patient first, profits later” ideology. Results-driven and ambitious, this team of individuals represents a functional balance between meeting company objectives and the needs of people around the world. The division is comprised of sales and marketing professionals who are passionate about their role in bringing phenom's prescription medicines, vaccines, and other medical products to our customers worldwide. Who are we looking for? A strong Professional for the position of the Hospital Specialist in Oncology who is responsible for promoting oncology brands within given accounts. On this position you would need to understand customers’ needs and have strong business acumen. It is important YOU are being equipped with excellent medical knowledge where you can transfer key medical data into customer / patient benefits. Expected Qualification of YOURS: - University degree preferably life science - 3+ years of experience working in a customer-facing role - Strong knowledge of customer/business strategy - Understanding of local healthcare and reimbursement systems - English language preferred Key competencies: - Customer & Market Insights: Ability to develop a deep understanding of customer needs, behaviours and goals, as well as market dynamics, competitor analysis and trends to improve overall business outcomes. - Customer Engagement: Ability to identify and appropriately build and maintain long-term, sustainable relationships with customers, external stakeholders and key influencers through a variety of relationship-building approaches. - Strategic Business Management: Ability to set strategic plans, consider execution trade-offs and continuously adjust approaches to maximize business performance and increase sales. - Excellent medical Product knowledge – excellent evidence-basedmedicine data knowledge. Ability to transfer medical data into customer / patients benefits Skills Required: - Driving license B - Advance Medical knowledge of oncology therapy area preferably or demonstrate high learning agility and interest in evidence-based medicine data and ability to transfer them into customer / patients benefits Leadership behaviours: - Drive result - Focus on customer and patient - Demonstrate ethics and integrity - High learning agility YOUR primary activities include but are not limited to: Account Understanding and Analysis - Understanding decision-making processes within the account, patient flow - Identifying Account Stakeholders and understanding their perspectives on phenom, our competitors - Completing a competitor analysis for the account - Obtaining an in-depth understanding of the account’s unmet and evolving needs Account Plan Development - Identifying short and long-term business opportunities. 
- Defining objectives for the account - Developing a plan for the Account that contains the account needs and perspectives as well as considers competitive and business challenges - Determining how to appropriately leverage cross-functional internal resources to maximize potential - Defining account metrics and a tracking plan Account Plan Implementation and Tracking - Developing and maintaining long-term engagements with customers/stakeholders within the Accounts that are responsible for treatment of the respective patients (all relevant HCPs) as well as product purchasing (hospital management, pharmacists) - Conducting product and value-based negotiation <p style="text-align:inherit"></p><p style="text-align:left"><b>English Job Description:</b></p><p style="text-align:inherit"></p><p style="text-align:inherit"></p><p></p><p><b>Search Firm Representatives Please Read Carefully </b><br>phenom & Co., Inc., Kenilworth, NJ, USA, also known as phenom phenom & phenom Corp., Kenilworth, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails. </p><p style="text-align:inherit"></p><p style="text-align:left"><b>Employee Status: </b></p>Regular<p style="text-align:inherit"></p><p style="text-align:left"><b>Relocation:</b></p><p style="text-align:inherit"></p><p style="text-align:left"><b>VISA Sponsorship:</b></p><p style="text-align:inherit"></p><p style="text-align:left"><b><span>Travel Requirements:</span></b></p><p style="text-align:inherit"></p><p style="text-align:left"><b>Flexible Work Arrangements:</b></p><p style="text-align:inherit"></p><p style="text-align:left"><b>Shift:</b></p><p style="text-align:inherit"></p><p style="text-align:left"><b>Valid Driving License:</b></p><p style="text-align:inherit"></p><p style="text-align:left"><b>Hazardous Material(s):</b></p><p style="text-align:inherit"></p><p style="text-align:left"><b>Number of Openings: </b></p>1",
"street_name": "Bartol",
"city": "Brentwood",
"country": "Slovenia",
"continent": "Europe"
}
I observed that the issue occurs in the content value, which contains HTML with embedded double quotes. I have tried various solutions such as json.loads(repr(data)) and html.escape(json.loads(data)), but the error is still not resolved.
I even tried replacing all the double quotes with an empty string, but then it threw an error saying "too many values to unpack".
I wanted to try this solution but was unable to figure out how to use a raw string (the r prefix) in this case, as my string is stored in the variable data.
Please suggest a workaround so that I can parse this JSON.
You have double quotes inside the quotes of the string for content. Use triple quotes (""") around the content so it becomes a valid string literal.
Alternatively, escape every quote mark as \" within the content string.
data = {
"name": "Shea",
"content": """<p style="text-align:left">Job Description</p><p style="text-align:inherit"></p>Have you heard about phenom? phenom is an innovative, global healthcare leader committed to improving health and well-being in 140 countries around the world. We continue to focus our research on conditions that affect millions of people around the world - diseases like Alzheimer's, Diabetes and Cancer - while further expanding our strengths in areas such as vaccines and biologics. We aspire to be the best healthcare company in the world and are dedicated to providing leading innovations and solutions for tomorrow. phenom’s Global Human Health (GHH) Division abides by a “patient first, profits later” ideology. Results-driven and ambitious, this team of individuals represents a functional balance between meeting company objectives and the needs of people around the world. The division is comprised of sales and marketing professionals who are passionate about their role in bringing phenom's prescription medicines, vaccines, and other medical products to our customers worldwide. Who are we looking for? A strong Professional for the position of the Hospital Specialist in Oncology who is responsible for promoting oncology brands within given accounts. On this position you would need to understand customers’ needs and have strong business acumen. It is important YOU are being equipped with excellent medical knowledge where you can transfer key medical data into customer / patient benefits. Expected Qualification of YOURS: - University degree preferably life science - 3+ years of experience working in a customer-facing role - Strong knowledge of customer/business strategy - Understanding of local healthcare and reimbursement systems - English language preferred Key competencies: - Customer & Market Insights: Ability to develop a deep understanding of customer needs, behaviours and goals, as well as market dynamics, competitor analysis and trends to improve overall business outcomes. - Customer Engagement: Ability to identify and appropriately build and maintain long-term, sustainable relationships with customers, external stakeholders and key influencers through a variety of relationship-building approaches. - Strategic Business Management: Ability to set strategic plans, consider execution trade-offs and continuously adjust approaches to maximize business performance and increase sales. - Excellent medical Product knowledge – excellent evidence-basedmedicine data knowledge. Ability to transfer medical data into customer / patients benefits Skills Required: - Driving license B - Advance Medical knowledge of oncology therapy area preferably or demonstrate high learning agility and interest in evidence-based medicine data and ability to transfer them into customer / patients benefits Leadership behaviours: - Drive result - Focus on customer and patient - Demonstrate ethics and integrity - High learning agility YOUR primary activities include but are not limited to: Account Understanding and Analysis - Understanding decision-making processes within the account, patient flow - Identifying Account Stakeholders and understanding their perspectives on phenom, our competitors - Completing a competitor analysis for the account - Obtaining an in-depth understanding of the account’s unmet and evolving needs Account Plan Development - Identifying short and long-term business opportunities. 
- Defining objectives for the account - Developing a plan for the Account that contains the account needs and perspectives as well as considers competitive and business challenges - Determining how to appropriately leverage cross-functional internal resources to maximize potential - Defining account metrics and a tracking plan Account Plan Implementation and Tracking - Developing and maintaining long-term engagements with customers/stakeholders within the Accounts that are responsible for treatment of the respective patients (all relevant HCPs) as well as product purchasing (hospital management, pharmacists) - Conducting product and value-based negotiation <p style="text-align:inherit"></p><p style="text-align:left"><b>English Job Description:</b></p><p style="text-align:inherit"></p><p style="text-align:inherit"></p><p></p><p><b>Search Firm Representatives Please Read Carefully </b><br>phenom & Co., Inc., Kenilworth, NJ, USA, also known as phenom phenom & phenom Corp., Kenilworth, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails. </p><p style="text-align:inherit"></p><p style="text-align:left"><b>Employee Status: </b></p>Regular<p style="text-align:inherit"></p><p style="text-align:left"><b>Relocation:</b></p><p style="text-align:inherit"></p><p style="text-align:left"><b>VISA Sponsorship:</b></p><p style="text-align:inherit"></p><p style="text-align:left"><b><span>Travel Requirements:</span></b></p><p style="text-align:inherit"></p><p style="text-align:left"><b>Flexible Work Arrangements:</b></p><p style="text-align:inherit"></p><p style="text-align:left"><b>Shift:</b></p><p style="text-align:inherit"></p><p style="text-align:left"><b>Valid Driving License:</b></p><p style="text-align:inherit"></p><p style="text-align:left"><b>Hazardous Material(s):</b></p><p style="text-align:inherit"></p><p style="text-align:left"><b>Number of Openings: </b></p>1""",
"street_name": "Bartol",
"city": "Brentwood",
"country": "Slovenia",
"continent": "Europe"
}
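As a side note, if data is already a Python dict, as in the snippet above, there is nothing for json.loads to parse; here is a small sketch of the reverse direction, which sidesteps the quoting problem entirely because json.dumps escapes the embedded double quotes (assuming the triple-quoted data defined above):
import json

# Serialise the dict to a valid JSON string; json.dumps escapes the
# double quotes inside the HTML automatically, so it round-trips.
json_string = json.dumps(data)
parsed = json.loads(json_string)
print(parsed["name"])   # -> Shea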
Is there any way to add different styles to columns made with column-count? I have a div that is divided into multiple columns using column-count. Only two columns are visible on the page at a time. I need to add a margin-left to the first column, a margin-right to the second column, and so on.
What I need is the same spacing on both (outer and inner) sides of the pages, just like in a book.
.main {
  overflow: scroll;
  width: 100%;
  height: 438px;
  column-gap: 160px;
  columns: 2 auto;
  column-fill: auto;
  margin-top: 5px;
}
<div class="main">
Wikidata is a free, collaborative, multilingual, secondary database, collecting structured data to provide support for Wikipedia, Wikimedia Commons, the other wikis of the Wikimedia movement, and to anyone in the world. What does this mean? Let's look
at the opening statement in more detail: Contents 1 What does this mean? 2 How does Wikidata work? 2.1 The Wikidata repository 2.2 Working with Wikidata 3 Where to get started 4 How can I contribute? 5 There is more to come Free. The data in Wikidata
is published under the Creative Commons Public Domain Dedication 1.0, allowing the reuse of the data in many different scenarios. You can copy, modify, distribute and perform the data, even for commercial purposes, without asking for permission. Collaborative.
Data is entered and maintained by Wikidata editors, who decide on the rules of content creation and management. Automated bots also enter data into Wikidata. Multilingual. Editing, consuming, browsing, and reusing the data is fully multilingual. Data
entered in any language is immediately available in all other languages. Editing in any language is possible and encouraged. A secondary database. Wikidata records not just statements, but also their sources, and connections to other databases. This
reflects the diversity of knowledge available and supports the notion of verifiability. Collecting structured data. Imposing a high degree of structured organization allows for easy reuse of data by Wikimedia projects and third parties, and enables
computers to process and “understand” it. Support for Wikimedia wikis. Wikidata assists Wikipedia with more easily maintainable information boxes and links to other languages, thus reducing editing workload while improving quality. Updates in one language
are made available to all other languages. Anyone in the world. Anyone can use Wikidata for any number of different ways by using its application programming interface. How does Wikidata work? This diagram of a Wikidata item shows you the most important
terms in Wikidata. Wikidata is a central storage repository that can be accessed by others, such as the wikis maintained by the Wikimedia Foundation. Content loaded dynamically from Wikidata does not need to be maintained in each individual wiki project.
For example, statistics, dates, locations and other common data can be centralized in Wikidata. The Wikidata repository Items and their data are interconnected. The Wikidata repository consists mainly of items, each one having a label, a description
and any number of aliases. Items are uniquely identified by a Q followed by a number, such as Douglas Adams (Q42). Statements describe detailed characteristics of an Item and consist of a property and a value. Properties in Wikidata have a P followed
by a number, such as with educated at (P69). For a person, you can add a property to specifying where they were educated, by specifying a value for a school. For buildings, you can assign geographic coordinates properties by specifying longitude and
latitude values. Properties can also link to external databases. A property that links an item to an external database, such as an authority control database used by libraries and archives, is called an identifier. Special Sitelinks connect an item
to corresponding content on client wikis, such as Wikipedia, Wikibooks or Wikiquote. All this information can be displayed in any language, even if the data originated in a different language. When accessing these values, client wikis will show the
most up-to-date data. Item Property Value Q42 P69 Q691283 Douglas Adams educated at St John's College Working with Wikidata There are a number of ways to access Wikidata using built-in tools, external tools, or programming interfaces. Wikidata Query
and Reasonator are some of the popular tools to search for and examine Wikidata items. The tools page has an extensive list of interesting projects to explore. Client wikis can access data for their pages using a Lua Scribunto interface. You can retrieve
all data independently using the Wikidata API. Where to get started The Wikidata tours designed for new users are the best place to learn more about Wikidata. Some links to get started: Set your user options, especially the 'Babel' extension, to choose
your language preferences Help with missing labels and descriptions Help with interwiki conflicts and constraint violations Improve a random item Help translating How can I contribute? Go ahead and start editing. Editing is the best way to learn about
the structure and concepts of Wikidata. If you would like to gain understanding of Wikidata's concepts upfront, you may want to have a look at the help pages. If you have questions, please feel free to drop them in the project chat or contact the development
team. There is more to come Wikidata is an ongoing project that is under active development. More data types as well as extensions will be available in the future. You can find more information about Wikidata and its ongoing development on the Wikidata
page on Meta. Subscribe to the the Wikidata mailing list to receive up-to-date information about the development and to participate in discussions about the future of the project. North Korea conducted its sixth nuclear test on 3 September 2017, according
to Japanese and South Korean officials. The Japanese Ministry of Foreign Affairs also concluded that North Korea conducted a nuclear test.[6] The United States Geological Survey reported an earthquake of 6.3-magnitude not far from North Korea's Punggye-ri
nuclear test site.[7] South Korean authorities said the earthquake seemed to be artificial, consistent with a nuclear test.[6] The USGS, as well as China's earthquake administration, reported that the initial event was followed by a second, smaller,
earthquake at the site, several minutes later, which was characterized as a collapse of the cavity.[8][9] North Korea claimed that it detonated a hydrogen bomb that can be loaded on to an intercontinental ballistic missile (ICBM) with great destructive
power.[10] Photos of North Korean leader Kim Jong-un inspecting a device resembling a thermonuclear weapon warhead were released a few hours before the test.[11] Contents 1 Yield estimates 2 Reactions 3 See also 4 References Yield estimates According
to estimates of Kim Young-Woo, the chief of the South Korean parliament's defense committee, the nuclear yield was equivalent to about 100 kilotons of TNT (100 kt). "The North's latest test is estimated to have a yield of up to 100 kilotons, though
it is a provisional report," Kim Young-Woo told Yonhap News Agency.[2] On 3 September, South Korea’s weather agency, the Korea Meteorological Administration, estimated that the nuclear blast yield of the presumed test was between 50 to 60 kilotons.[3]
On 4 September, the academics from University of Science and Technology of China[12] have released their findings based on seismic results and concluded that the Nuclear Test Location is at 41°17′53.52″N 129°4′27.12″E on 03:30 UTC which is only a few
hundred meters apart from the previous 4 tests (2009, 2013, January 2016 and September 2016) with the estimated yield at 108.1 ± 48.1 kt. In contrast, the independent seismic monitoring agency NORSAR estimated that the blast had a yield of about 120
kilotons, based on a seismic magnitude of 5.8.[4] The Federal Institute for Geosciences and Natural Resources in Germany estimates a higher yield at "a few hundred kilotons" based on a detected tremor of 6.1 magnitude.[5] Reactions South Korea, China,
Japan, Russia and members of the ASEAN[13] voiced strong criticism of the nuclear test.[14] US President Donald Trump tweeted "North Korea has conducted a major Nuclear Test. Their words and actions continue to be very hostile and dangerous to the United
States".[15][16] Trump was asked whether the US would attack North Korea and replied, "We'll see".[17] On September 3, U.S. Defense Secretary James Mattis warned North Korea, saying that the country would be met with a "massive military response" if
it threatened the United States or its allies.[18] The United Nations Security Council will meet in an open emergency meeting on September 4, 2017 at the request of the US, South Korea, Japan, France and the UK.[19] Federal Institute for Geosciences
and Natural Resources From Wikipedia, the free encyclopedia (Redirected from Bundesanstalt für Geowissenschaften und Rohstoffe) Federal Institute for Geosciences and Natural Resources Bundesanstalt für Geowissenschaften und Rohstoffe (BGR) Agency overview
Headquarters Hanover, Germany Employees 795 in 2013 Website www.bgr.bund.de The Federal Institute for Geosciences and Natural Resources (Bundesanstalt für Geowissenschaften und Rohstoffe or BGR) is a German agency within the Federal Ministry of Economics
and Technology. It acts as a central geoscience consulting institution for the German federal government.[1] The headquarters of the agency is located in Hanover and there is a branch in Berlin. Early 2013, the BGR employed a total of 795 employees.
The BGR, the State Authority for Mining, Energy and Geology and the Leibniz Institute for Applied Geophysics form the Geozentrum Hanover. All three institutions have a common management and infrastructure, and complement each other through their interdisciplinary
expertise.
</div>
Here is the JSFiddle link for testing.
When I use json_decode(someJsonObject), I can generate an array and read off the attributes. However, when one of these attributes contains text of around 821 characters, I cannot generate an array, and json_decode appears not to recognise the object, even though I have validated the JSON representation (via http://jsonlint.com/). Can someone please help?
The JSON object that I cannot decode:
[{"abstractText":"Ebola viruses and Marburg viruses include some of the most virulent and fatal pathogens known to humans. These viruses cause severe haemorrhagic fevers, with case fatality rates in the range 25-90%. The diagnosis of filovirus using formalin-fixed tissues from fatal cases poses a significant challenge. The most characteristic histopathological findings are seen in the liver; however, the findings overlap with many other viral and non-viral haemorrhagic diseases. The need to distinguish filovirus infections from other haemorrhagic fevers, particularly in areas with multiple endemic viral haemorrhagic agents, is of paramount importance. In this review we discuss the current state of knowledge of filovirus infections and their pathogenesis, including histopathological findings, epidemiology, modes of transmission and filovirus entry and spread within host organisms. The pathogenesis of filovirus infections is complex and involves activation of the mononuclear phagocytic system, with release of pro-inflammatory cytokines, chemokines and growth factors, endothelial dysfunction, alterations of the innate and adaptive immune systems, direct organ and endothelial damage from unrestricted viral replication late in infection, and coagulopathy. Although our understanding of the pathogenesis of filovirus infections has rapidly increased in the past few years, many questions remain unanswered. Copyright © 2014 Pathological Society of Great Britain and Ireland. Published by John Wiley \u0026 Sons, Ltd.","authorString":"Martines RB, Ng DL, Greer PW, Rollin PE, Zaki SR.","issue":"2","journalTitle":"J. Pathol.","pageInfo":"153-174","pmid":"25297522","pubYear":"2015","title":"Tissue and cellular tropism, pathology and pathogenesis of Ebola and Marburg viruses.","volume":"235"},{"abstractText":"Good medical ethics needs to look more to the resources of public health ethics and use more societal, population or community values and perspectives, rather than defaulting to the individualistic values that currently dominate discussion. In this paper I argue that we can use the recent response to Ebola as an example of a major failure of the global community in three ways. First, the focus has been on the treatment of individuals rather than seeing that the priority ought to be public health measures. Second, the advisory committee on experimental interventions set up by the WHO has focused on ethical issues related to individuals and their guidance has been unclear. Third, the Ebola issue can be seen as a symptom of a massive failure of the global community to take sufficient notice of global injustice.","authorString":"Dawson AJ.","issue":"1","journalTitle":"J Med Ethics","pageInfo":"107-110","pmid":"25516949","pubYear":"2015","title":"Ebola: what it tells us about medical ethics.","volume":"41"}]
I'm an undergrad who finds computer vision to be fascinating. Where should somebody brand new to computer vision begin?
Check out this book
http://research.microsoft.com/en-us/um/people/szeliski/book/
It is in beta stage right now and available for free.
Richard Szeliski, the author, is a well-known researcher in the field of computer vision. He is also behind the Photosynth project.
Get your hands dirty! What language do you program in? I would recommend looking at OpenCV, which is an open source library that comes with many functions you can use to build interesting systems. It is written for C++ but also has bindings for Python. It comes with many demos that you can run right away and hack around with.
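For example, here is a minimal first script with the Python bindings, assuming OpenCV is installed and using a placeholder image path:
import cv2

# Load an image (the path is a placeholder), convert to grayscale, run Canny
# edge detection, and show the result until a key is pressed.
img = cv2.imread("example.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, threshold1=100, threshold2=200)
cv2.imshow("edges", edges)
cv2.waitKey(0)
cv2.destroyAllWindows()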
For a complete overview of the field, books are the best way to go.
For any particular topic you want to know more about, survey papers found through Google Scholar are the way to go.
For the most recent research, look at papers from CVPR, a major vision conference:
http://www.cvpapers.com/cvpr2010.html
You definitely need a solid math background: calculus, linear algebra, signal processing, probability and statistics.
You also need to understand what specific problems are studied in computer vision: recognizing an image of a particular object, recognizing a general class of objects ("cars"), detecting whether an object is present in an image, locating an object in an image, tracking moving objects in video, reconstructing a 3D object or scene from an image or a set of images, etc.
I was once told by a professor of a good way to get into a new field: go to the library, find the main journal for that field, and start reading the abstracts of papers until you pick up the lingo. In the case of computer vision, good journals to look at are IEEE Transactions on Pattern Analysis and Machine Intelligence (aka PAMI) and the International Journal of Computer Vision (aka IJCV). By the way, the two major conferences in computer vision are CVPR (the IEEE Conference on Computer Vision and Pattern Recognition) and ICCV (the International Conference on Computer Vision).
Topics that are related or heavily overlap with vision are image processing and machine learning.
If there is a course in computer vision offered at your school, take it. Get some books on the subjects I've mentioned. If there is a vision-related conference near where you live, sneak in and look at the posters.
Oh, and Matlab is a great environment to play with image processing and vision algorithms.
Some resources:
Learning about Computer Vision
You must have a background in signal processing methods: transforms (Fourier, Hough), etc.
You may want to use a convenient environment such as MATLAB for image processing.
Pattern classification methods.
Neural networks are an important and widely used tool in computer vision.
As with most other things at school, start by taking a course with a good amount of project work. Explore ideas and implement the algorithms you find interesting in those projects. Wikipedia is a good beginner's resource, as usual. If you want books, the most popular ones are:
http://www.amazon.com/Multiple-View-Geometry-Computer-Vision/dp/0521540518
http://www.amazon.com/Computer-Vision-Approach-David-Forsyth/dp/0130851981/
http://research.microsoft.com/en-us/um/people/szeliski/book/drafts/SzeliskiBook_20100423_draft.pdf
But I would suggest that before you jump into books, you take a course or go through some course slides from one of the top ten universities or via iTunes U.
I found this guide to be pretty good at introducing the novice to computer vision, but you really need to go for an MS for that. Electrical and Computer Engineering departments offer it under a Digital Signal Processing program, from which you can choose to specialize in Machine Vision or Digital Imaging (or whatever they may call it).
SOCIETY OF ROBOTS - COMPUTER VISION TUTORIAL