Home
Search results for “Correlating questionnaire data mining”
SPSS Questionnaire/Survey Data Entry - Part 1
 
04:27
How to enter and analyze questionnaire (survey) data in SPSS is illustrated in this video. Lots more Questionnaire/Survey & SPSS Videos here: https://www.udemy.com/survey-data/?couponCode=SurveyLikertVideosYT Check out our next text, 'SPSS Cheat Sheet,' here: http://goo.gl/b8sRHa. Prime and ‘Unlimited’ members, get our text for free. (Only 4.99 otherwise, but likely to increase soon.) Survey data Survey data entry Questionnaire data entry Channel Description: https://www.youtube.com/user/statisticsinstructor For step-by-step help with statistics, with a focus on SPSS. Both descriptive and inferential statistics covered. For descriptive statistics, topics covered include: mean, median, and mode in spss, standard deviation and variance in spss, bar charts in spss, histograms in spss, bivariate scatterplots in spss, stem and leaf plots in spss, frequency distribution tables in spss, creating labels in spss, sorting variables in spss, inserting variables in spss, inserting rows in spss, and modifying default options in spss. For inferential statistics, topics covered include: t tests in spss, anova in spss, correlation in spss, regression in spss, chi square in spss, and MANOVA in spss. New videos regularly posted. Subscribe today! YouTube Channel: https://www.youtube.com/user/statisticsinstructor Video Transcript: In this video we'll take a look at how to enter questionnaire or survey data into SPSS and this is something that a lot of people have questions about, so it's important to make sure when you're working with SPSS, in particular when you're entering data from a survey, that you know how to do it. Let's go ahead and take a few moments to look at that. And here you see on the right-hand side of your screen I have a questionnaire, a very short sample questionnaire that I want to enter into SPSS, so we're going to create a data file, and in this questionnaire here I've made a few modifications. I've underlined some variable names here and I'll talk about that more in a minute, and I also put numbers in parentheses to the right of these different names and I'll also explain that as well. Now normally when someone sees this survey we wouldn't have gender underlined for example, nor would we have these numbers to the right of male and female. So that's just for us, to help better understand how to enter these data. So let's go ahead and get started here. In SPSS the first thing we need to do is, every time we have a possible answer such as male or female, we need to create a variable in SPSS that will hold those different answers. So our first variable needs to be gender, and that's why that's underlined there, just to assist us as we're doing this. So we want to make sure we're in the Variable View tab and then in the first row here under Name we want to type gender and then press ENTER and that creates the variable gender. Now notice here I have two options: male and female. So when people respond or circle or check here that they're male, I need to enter into SPSS some number to indicate that. So we always want to enter numbers whenever possible into SPSS, because SPSS, for the vast majority of analyses, performs statistical analyses on numbers, not on words. So I wouldn't want to enter male, female, and so forth. I want to enter ones, twos, and so on. So notice here I just arbitrarily decided males get a 1 and females get a 2. It could have been the other way around, but since male was the first name listed I went ahead and gave that a 1, and then for females I gave a 2.
So what we want to do in our data file here is go ahead and go to Values, this column, click on the None cell, notice these three dots appear (they're called an ellipsis), click on that, and then for our first value, notice here 1 is male, so enter a Value of 1, type the Label Male, and then click Add. And then our second value of 2 is for females, so go ahead and enter 2 for Value and then Female, click Add, and then we're done with that; you want to see both of them down here, and that looks good, so click OK. Now those labels are in here and I'll show you how that works when we enter some numbers in a minute. OK, next we have ethnicity, so I'm going to call this variable ethnicity. So go ahead and type that in, press ENTER, and then we're going to do the same thing: we're going to create value labels here, so 1 is African-American, 2 is Asian-American, and so on. And I'll just do that very quickly, so go to the Values column, click on the ellipsis. For 1 we have African American, for 2 Asian American, 3 is Caucasian, and just so you can see that here, 3 is Caucasian, 4 is Hispanic, and other is 5, so let's go ahead and finish that: 4 is Hispanic, 5 is other. OK, and that's it for that variable. Now we do have, where it says please state, a free-text response; I'll talk about that next. That's important, because when respondents can enter text we have to handle that differently.
Views: 393689 Quantitative Specialists
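The numeric-coding idea from the video above (storing 1/2 codes and attaching value labels) can also be sketched outside SPSS. The snippet below is an illustrative Python/pandas analogue, not part of the video; the column names, codes, and sample responses are assumptions that simply mirror the example questionnaire.

```python
import pandas as pd

# Codes mirror the video's scheme: 1 = Male, 2 = Female; 1-5 for ethnicity
gender_labels = {1: "Male", 2: "Female"}
ethnicity_labels = {1: "African American", 2: "Asian American",
                    3: "Caucasian", 4: "Hispanic", 5: "Other"}

# Hypothetical questionnaire responses entered as numbers, as recommended in the video
df = pd.DataFrame({"gender": [1, 2, 2, 1],
                   "ethnicity": [3, 1, 5, 4]})

# Attach human-readable labels, a rough pandas analogue of SPSS value labels
df["gender_label"] = df["gender"].map(gender_labels)
df["ethnicity_label"] = df["ethnicity"].map(ethnicity_labels)
print(df)
```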
How To... Calculate Pearson's Correlation Coefficient (r) by Hand
 
09:26
Step-by-step instructions for calculating the correlation coefficient (r) for sample data, to determine if there is a relationship between two variables.
Views: 320315 Eugene O'Loughlin
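For readers who prefer code to hand calculation, here is a minimal Python sketch of the same definitional formula for r (sum of cross-products of deviations divided by the square root of the product of the sums of squared deviations); the sample numbers are made up for illustration.

```python
import math

def pearson_r(x, y):
    """Compute Pearson's r from paired samples using the definitional formula."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Sum of cross-products of deviations, and sums of squared deviations
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    syy = sum((yi - mean_y) ** 2 for yi in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical sample data
print(pearson_r([1, 2, 3, 4, 5], [2, 4, 5, 4, 6]))  # approximately 0.85
```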
Survey Data Analysis using Google Form surveys
 
08:19
Survey Data Analysis using Google Form surveys
Views: 26328 HiMrBogle
Survey Correlation
 
02:16
Alan Jackson, President and CEO of The Jackson Group, talks about the benefits of correlating data from different surveys. Surveying, while important, is only the first step. Knowing what to do with the data and how it impacts and meshes with data from other areas is key.
Views: 888 JacksonGroupInc
Excel Data Analysis ToolPak - Building a Correlation Matrix
 
09:10
We demonstrate installing the Data Analysis ToolPak Excel add-in and how to easily build a Karl Pearson correlation matrix. The data set used can be downloaded from http://www.learnanalytics.in/blog/?p=150 . Please subscribe to our channel to receive updates and also join our LinkedIn Group for the latest training videos and articles @http://www.linkedin.com/groups/Learn-Analytics-step-time-4240061
Views: 108711 Learn Analytics
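The same kind of pairwise Pearson correlation matrix the ToolPak produces can be approximated in Python with pandas; the small data frame below is a made-up stand-in, since the linked data set is not reproduced here.

```python
import pandas as pd

# Hypothetical numeric dataset; in practice this would be the downloaded file
df = pd.DataFrame({
    "sales":    [10, 12, 15, 11, 18, 20],
    "ad_spend": [1.0, 1.2, 1.5, 1.1, 2.0, 2.2],
    "visits":   [100, 130, 160, 120, 210, 250],
})

# Pairwise Pearson correlation matrix, analogous to the ToolPak output
corr_matrix = df.corr(method="pearson")
print(corr_matrix.round(2))
```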
Creating Correlation Table Using Data Analysis in Excel
 
09:40
In this video, I will show you how to use the Data Analysis tool in MS Excel to create a correlation table of multiple numerical variables. We use the Boston Housing dataset for demonstration. Please let me know if you have any questions. Thanks.
Views: 2640 IT_CHANNEL
Splitting a Continuous Variable into High and Low Values
 
03:53
In this video I show you how to create a new categorical variable from a continuous variable (e.g., high and low age). This is also known as a 'median split' approach.
Views: 46655 James Gaskin
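A median split like the one demonstrated above can be done in a couple of lines of Python; the age values below are hypothetical, and the at-or-below-median = "low" convention is one arbitrary choice (ties at the median can be assigned either way).

```python
import numpy as np
import pandas as pd

# Hypothetical continuous variable
age = pd.Series([23, 31, 45, 52, 29, 61, 38, 47])

# Median split: at or below the median -> "low", above the median -> "high"
median = age.median()
age_group = np.where(age <= median, "low", "high")

print(pd.DataFrame({"age": age, "age_group": age_group}))
```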
Correlation, Positive, Negative, None, and Correlation Coefficient
 
09:56
http://www.gdawgenterprises.com This video demonstrates different categories or types of correlation, including positive correlation, negative correlation, and no correlation. Also, strength of correlation is shown. The graphing calculator is used to quantify the strength of correlation by finding the correlation coefficient, sometimes called R.
Views: 40288 gdawgrapper
Principal Components Analysis - SPSS (part 1)
 
05:06
I demonstrate how to perform a principal components analysis based on some real data that correspond to the percentage discount/premium associated with nine listed investment companies. Based on the results of the PCA, the listed investment companies could be segmented into two largely orthogonal components.
Views: 179522 how2stats
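As a rough, non-SPSS counterpart to the analysis described above, a principal components analysis can be sketched with scikit-learn; the 9 x 6 random matrix stands in for the real discount/premium data, which is not available here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder data: 9 listed investment companies by 6 hypothetical measurements
rng = np.random.default_rng(0)
X = rng.normal(size=(9, 6))

# Standardise the variables, then extract two components
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
scores = pca.fit_transform(X_std)

print(pca.explained_variance_ratio_)  # share of variance captured by each component
print(scores)                         # component scores that could be used to segment the companies
```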
Testing for correlations in data with Excel
 
04:57
Learn how to carry out tests for correlations in data using Microsoft Excel, including Spearman’s rank correlation and Pearson’s product moment correlation. https://global.oup.com/academic/product/research-methods-for-the-biosciences-9780198728498 This video relates to section 9.5 in the book Research Methods for the Biosciences third edition by Debbie Holmes, Peter Moody, Diana Dine, and Laurence Trueman. The video is narrated by Laurence Trueman. © Oxford University Press
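Outside Excel, the same two tests can be run with SciPy; this is a brief sketch using made-up paired measurements, not the book's worked example.

```python
from scipy import stats

# Hypothetical paired measurements
x = [2.1, 3.4, 4.8, 5.0, 6.2, 7.1]
y = [1.9, 3.0, 5.1, 4.8, 6.5, 7.4]

# Pearson's product moment correlation (interval/ratio data, linear relationship)
r, p_pearson = stats.pearsonr(x, y)

# Spearman's rank correlation (ranks; suitable for ordinal data or monotonic relationships)
rho, p_spearman = stats.spearmanr(x, y)

print(f"Pearson r = {r:.2f} (p = {p_pearson:.3f})")
print(f"Spearman rho = {rho:.2f} (p = {p_spearman:.3f})")
```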
Analysis of Covariance (ANCOVA) - SPSS (part 1)
 
05:05
I demonstrate how to perform an analysis of covariance (ANCOVA) in SPSS. The first part of the series covers the conventional approach to ANCOVA, getting SPSS to estimate adjusted means through the GLM univariate utility. In the second part of the series, I demonstrate the exact correspondence between ANCOVA and multiple regression. NB: The results of the analysis in this series found that males appear to have larger cranial capacities than females, even after controlling for the effects of body size. However, it is important to emphasize that research has found that there are little to no general mean differences in IQ between males and females. Furthermore, there is neuroanatomical research to suggest that female brains appear to have more neurons per cubic cm than male brains. Thus, the difference in cranial capacity/brain size between the sexes may be counteracted by the differences in neuronal density.
Views: 214178 how2stats
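The ANCOVA-as-regression correspondence mentioned in the description can also be illustrated outside SPSS; the sketch below uses statsmodels with an entirely hypothetical data set (the variable names echo the cranial-capacity example, but the numbers are invented).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: outcome by group, with a continuous covariate
df = pd.DataFrame({
    "capacity":  [1350, 1420, 1300, 1450, 1250, 1380, 1310, 1400],
    "body_size": [60, 75, 58, 80, 55, 70, 62, 78],
    "sex":       ["F", "M", "F", "M", "F", "M", "F", "M"],
})

# ANCOVA expressed as a regression: the group effect is adjusted for the covariate
model = smf.ols("capacity ~ body_size + C(sex)", data=df).fit()
print(model.summary())
```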
Using twitter to predict heart disease | Lyle Ungar | TEDxPenn
 
15:13
Can Twitter predict heart disease? Day in and day out, we use social media, making it the center of our social lives, work lives, and private lives. Lyle Ungar reveals how our behavior on social media actually reflects aspects about our health and happiness. Lyle Ungar is a professor of Computer and Information Science and Psychology at the University of Pennsylvania and has analyzed 148 million tweets from more than 1,300 counties that represent 88 percent of the U.S. population. His published research has been focused around the area of text mining. He has published over 200 articles and holds eleven patents. His current research deals with statistical natural language processing, spectral methods, and the use of social media to understand the psychology of individuals and communities. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at http://ted.com/tedx
Views: 3688 TEDx Talks
Pearson's r Correlation
 
04:03
statisticslectures.com - where you can find free lectures, videos, and exercises, as well as get your questions answered on our forums!
Views: 158991 statslectures
SPSS: Analyzing Subsets and Groups
 
10:14
Instructional video on how to analyze subsets and groups of data using SPSS, statistical analysis and data management software. For more information, visit SSDS at https://ssds.stanford.edu.
Interpreting correlation coefficients in a correlation matrix
 
05:55
Learn how to interpret a correlation matrix. http://youstudynursing.com/ Research eBook: http://amzn.to/1hB2eBd Related Videos: http://www.youtube.com/playlist?list=PLs4oKIDq23Ac8cOayzxVDVGRl0q7QTjox A correlation matrix displays the correlation coefficients among numerous variables in a research study. This type of matrix will appear in hypothesis testing or exploratory quantitative research studies, which are designed to test the relationships among variables. In order to interpret this matrix you need to understand how correlations are measured. Correlation coefficients always range from -1 to +1. The positive or negative sign tells you the direction of the relationship and the number tells you the strength of the relationship. The most common way to quantify this relationship is the Pearson product moment correlation coefficient (Munro, 2005). Mathematically it is possible to calculate correlations with any level of data. However, the method of calculating these correlations will differ based on the level of the data. Although Pearson's r is the most commonly used correlation coefficient, Pearson's r is only appropriate for correlations between two interval or ratio level variables. When examining the formula for Pearson's r it is evident that part of the calculation relies on knowing the difference between individual cases and the mean. Since the distance between values is not known for ordinal data and a mean cannot be calculated, Pearson's r cannot be used. Therefore another method must be used. ... Recall that correlations measure both the direction and strength of a linear relationship among variables. The direction of the relationship is indicated by the positive or negative sign before the number. If the correlation is positive it means that as one variable increases so does the other one. People who tend to score high for one variable will also tend to score high for another variable. Therefore if there is a positive correlation between hours spent watching course videos and exam marks it means that people who spend more time watching the videos tend to get higher marks on the exam. Remember that a positive correlation is like a positive relationship, both people are moving in the same direction through life together. If the correlation is negative it means that as one variable increases the other decreases. People who tend to score high for one variable will tend to score low for another. Therefore if there is a negative correlation between unmanaged stress and exam marks it means that people who have more unmanaged stress get lower marks on their exam. Remember that a negative correlation is like a negative relationship, the people in the relationship are moving in opposite directions. Remember that the sign (positive or negative) tells you the direction of the relationship and the number beside it tells you how strong that relationship is. To judge the strength of the relationship consider the actual value of the correlation coefficient. Numerous sources provide similar ranges for the interpretation of the relationships that approximate the ranges on the screen. These ranges provide guidelines for interpretation. If you need to memorize these criteria for a course check the table your teacher wants you to learn. Of course, the higher the number is the stronger the relationship is. In practice, researchers are happy with correlations of 0.5 or higher. 
Also note that when drawing conclusions from correlations the size of the sample as well as the statistical significance is considered. Remember that the direction of the relationship does not affect the strength of the relationship. One of the biggest mistakes people make is assuming that a negative number is weaker than a positive number. In fact, a correlation of -0.80 is just as high or just as strong as a correlation of +0.80. When comparing the values on the screen a correlation of -0.75 is actually stronger than a correlation of +0.56. ... Notice that there are correlations of 1 on a diagonal line across the table. That is because each variable should correlate perfectly with itself. Sometimes dashes are used instead of 1s. In a correlation matrix, typically only one triangle (half of the table) is filled out. That is because the other half would simply be a mirror image of it. Examine this correlation matrix and see if you can identify and interpret the correlations. A great question for an exam would be to give you a correlation matrix and ask you to find and interpret correlations. What is the correlation between completed readings and unmanaged stress? What does it mean? Which coefficient gives you the most precise prediction? Which correlations are small enough that they would not be of much interest to the researcher? Which two correlations have the same strength? From looking at these correlations, what could a student do to get a higher mark on an exam? Comment below to start a conversation.
Views: 44820 NurseKillam
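The interpretation rules described above (the sign gives direction, the absolute value gives strength, and the cut-offs are only guidelines) can be captured in a small helper function; the thresholds below are illustrative assumptions, not the specific ranges shown on screen in the video.

```python
def describe_correlation(r):
    """Classify a correlation's direction and rough strength.

    The cut-offs are illustrative guidelines only; use the ranges
    your own course or textbook specifies.
    """
    direction = "positive" if r > 0 else "negative" if r < 0 else "no"
    strength = abs(r)
    if strength >= 0.7:
        label = "strong"
    elif strength >= 0.5:
        label = "moderate to strong"
    elif strength >= 0.3:
        label = "weak to moderate"
    else:
        label = "weak or negligible"
    return f"{label} {direction} correlation (r = {r:+.2f})"

# The sign does not affect strength: -0.80 is just as strong as +0.80
print(describe_correlation(-0.80))
print(describe_correlation(0.56))
```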
Correlations Google Doc
 
08:28
Create a correlation matrix between any two stocks using Google Docs. This is a much more flexible platform than Excel, as data and dates are updated for you.
Views: 3540 Al On Options
How To... Calculate a Correlation Coefficient (r) in Excel 2010
 
10:22
Learn how to use the CORREL function and to manually calculate the correlation coefficient (r) in Excel 2010. This allows you to examine whether there is a statistical correlation between two variables. Please note: Correlation is NOT causation!
Views: 137229 Eugene O'Loughlin
SPSS for newbies: Select cases, removing outliers, data cleaning
 
17:24
SPSS tutorial/guide How to remove outliers in SPSS How to select a part of the data to analyze in SPSS (proper term is selecting a subset, or selecting cases to analyze) How to clean your data in SPSS
Views: 111680 Phil Chan
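Selecting cases and screening outliers, as covered in the SPSS tutorial above, has a straightforward pandas counterpart; the tiny data frame and the 1.5 × IQR rule below are illustrative assumptions, not the tutorial's own data or cut-off.

```python
import pandas as pd

# Hypothetical dataset with an obvious outlier in "score"
df = pd.DataFrame({"group": ["A", "A", "B", "B", "B"],
                   "score": [52, 48, 55, 300, 47]})

# Select a subset of cases to analyse (analogous to SPSS "Select Cases")
group_b = df[df["group"] == "B"]

# Screen outliers with the 1.5 * IQR rule (a common, but arbitrary, convention)
q1, q3 = df["score"].quantile([0.25, 0.75])
iqr = q3 - q1
cleaned = df[df["score"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]

print(group_b)
print(cleaned)  # the row with score == 300 falls outside the fences and is dropped
```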
Correlation Analysis ROI Digital Marketing
 
06:07
In a perfect world, we could calculate ROI from individual activities such as SEO, Social Media, website design, brochures, cooperative marketing efforts, etc. But consumer purchase decisions often take into account multiple “touch points” before a buying decision is made. Also, the buying cycle may be 3-12 months; our clients don't sell milk, eggs, or bread. How do you connect today’s marketing activities with a purchase 6 months from now? Calculating ROI should be a long-term effort. If clients shared monthly sales data we could correlate sales with various metrics such as (1) Website visits, (2) social media metrics, (3) leads from website forms, and so on.
Re-Assigned Incidents & Breached Service Level Agreements
 
03:55
Do re-assigned incidents correlate with a higher percentage of SLA breaches?
Analyzing Student Performance
 
03:30
PowerSchool - How to analyze student performance
Views: 44 Garrett Burgett
Using Formulas with Google Form Responses
 
03:17
This video is brought to you by Profound Cloud. Google Forms are one of the easiest ways to collect data from your friends, family, colleagues and more. A great way to make Forms even more powerful is by taking actions upon the responses in a Google Spreadsheet. A lot of people get frustrated when they insert a formula into the responses Sheet, because the function doesn't seem to carry over for every new entry. It can be very time-consuming, and frankly pretty irritating, to manually extend the function for every new response. This video covers a really easy way to use the QUERY function in conjunction with Google Form responses. You will have to create a new tab within the Responses Sheet, and it's definitely a time saver. To watch the updated video and read the full article on the BetterCloud Monitor, visit: https://www.bettercloud.com/monitor/the-academy/using-formulas-with-google-form-responses/
Views: 56635 The Gooru
Measuring the Economy in a Digital Age
 
01:16:31
Experts discuss methods of economic measurement. Speakers: Diana Farrell, Chief Executive Officer and President, JPMorgan Chase Institute Matthew D. Shapiro, Lawrence R. Klein Collegiate Professor of Economics, University of Michigan; Research Associate, National Bureau of Economic Research Hal R. Varian, Chief Economist, Google Presider: Sebastian Mallaby, Paul A. Volcker Senior Fellow for International Economics, Council on Foreign Relations This symposium is presented by the Maurice R. Greenberg Center for Geoeconomic Studies and is made possible through the generous support of Stephen C. Freidheim.
Views: 660 Council Comm
Yelawolf - American You
 
03:38
Pre-order the album Love Story now On iTunes: http://smarturl.it/YelaLoveStory Google Play: http://smarturl.it/YelaLoveStoryGP Amazon MP3: http://smarturl.it/YelaLoveStoryAmz Sign up for updates: http://smarturl.it/Yelawolf.News Best of Yelawolf: https://goo.gl/vy7NZQ Subscribe here: https://goo.gl/ynkVDL
Views: 31757987 YelawolfVEVO
Demographic Research and data
 
01:40
Let Heritage help you find your marketing audience. We've got the resources and experience to help you narrow down the specifics of who your potential customers are. We'll help you target your efforts to those prospects and save you thousands on wasted materials and postage.
Views: 577 Heritage Integrated
Predictive analytics
 
42:22
Predictive analytics encompasses a variety of statistical techniques from modeling, machine learning, and data mining that analyze current and historical facts to make predictions about future, or otherwise unknown, events. In business, predictive models exploit patterns found in historical and transactional data to identify risks and opportunities. Models capture relationships among many factors to allow assessment of risk or potential associated with a particular set of conditions, guiding decision making for candidate transactions. This video is targeted to blind users. Attribution: Article text available under CC-BY-SA Creative Commons image source in video
Views: 114 Audiopedia
Ways with Words | Big Data || Radcliffe Institute
 
01:26:37
PANEL 2: BIG DATA The Internet, social media, and data mining have changed language and our ability to analyze usage, and increased sensitivities to the power of the words we use. This panel will explore how these new forms of discourse and analysis expand our understanding of the interplay of gender, personal narrative, and language, as well as data scraping that enables a statistical study of language usage by demographics. Ben Hookway (7:43), Chief Executive Officer, Relative Insight Lyle Ungar (20:53), Professor and Graduate Group Chair, Computer and Information Science, University of Pennsylvania Alice E. Marwick (36:19), Assistant Professor, Department of Communication and Media Studies, and Director, McGannon Center for Communication Research, Fordham University Moderator: Rebecca Lemov, Associate Professor of the History of Science, Harvard University Q&A (52:02)
Views: 894 Harvard University
Research to Care 2017 - Morning Session Presentations
 
02:47:04
Watch the morning session of the Research to Care 9/11 Community Engagement Event at NYU Langone Medical Center and hear what we've learned so far about 9/11 health effects from the researchers themselves.
Mod-01 Lec-07B Exploratory Data Analysis – Part B
 
46:19
Statistics for Experimentalists by Dr. A. Kannan,Department of Chemical Engineering,IIT Madras.For more details on NPTEL visit http://nptel.ac.in
Views: 769 nptelhrd
Quantitative marketing research
 
09:06
Quantitative marketing research is the application of quantitative research techniques to the field of marketing. It has roots in both the positivist view of the world, and the modern marketing viewpoint that marketing is an interactive process in which both the buyer and seller reach a satisfying agreement on the "four Ps" of marketing: Product, Price, Place and Promotion. As a social research method, it typically involves the construction of questionnaires and scales. People who respond are asked to complete the survey. Marketers use the information to obtain and understand the needs of individuals in the marketplace, and to create strategies and marketing plans. This video is targeted to blind users. Attribution: Article text available under CC-BY-SA Creative Commons image source in video
Views: 364 Audiopedia
Stanford Webinar: Using Genomics, Wearables and Big Data to Manage Health and Disease
 
41:37
Through genome sequencing, in combination with other omic information such as microbiome, methylome, metabolome, etc., data can be used to genetically predict disease risk. This information, combined with data collected through technology such as wearables, can help people manage disease and maintain healthy lives. Join Dr. Michael Snyder and Dr. Barry Starr as they explore the advances in genomic sequencing and how it can be used to predict, diagnose, and treat disease. You will learn: how genomics can be used to predict disease; the power of longitudinal profiling; what data is collected from wearables and how they’re valuable to monitoring health; and how genome sequencing and big data can impact your health. About the Speaker: Michael Snyder is the Stanford Ascherman Professor and Chair of Genetics and the Director of the Center of Genomics and Personalized Medicine. Dr. Snyder received his Ph.D. training at the California Institute of Technology and carried out postdoctoral training at Stanford University. He is a leader in the field of functional genomics and proteomics, and one of the major participants of the ENCODE project. His laboratory study was the first to perform a large-scale functional genomics project in any organism, and has launched many technologies in genomics and proteomics that have been used for characterizing genomes, proteomes and regulatory networks.
Views: 3602 stanfordonline
Google Factory Tour of Search
 
02:09:58
Video of Google Factory Tour of Search including announcement of Google Health
Views: 30174 Google
United States Environmental Protection Agency
 
38:49
The United States Environmental Protection Agency (EPA or sometimes USEPA) is an agency of the U.S. federal government which was created for the purpose of protecting human health and the environment by writing and enforcing regulations based on laws passed by Congress. The EPA was proposed by President Richard Nixon and began operation on December 2, 1970, after Nixon signed an executive order. The order establishing the EPA was ratified by committee hearings in the House and Senate. The agency is led by its Administrator, who is appointed by the president and approved by Congress. The current administrator is Gina McCarthy. The EPA is not a Cabinet department, but the administrator is normally given cabinet rank. The EPA has its headquarters in Washington, D.C., regional offices for each of the agency's ten regions, and 27 laboratories. The agency conducts environmental assessment, research, and education. It has the responsibility of maintaining and enforcing national standards under a variety of environmental laws, in consultation with state, tribal, and local governments. It delegates some permitting, monitoring, and enforcement responsibility to U.S. states and the federally recognized tribes. EPA enforcement powers include fines, sanctions, and other measures. The agency also works with industries and all levels of government in a wide variety of voluntary pollution prevention programs and energy conservation efforts. This video is targeted to blind users. Attribution: Article text available under CC-BY-SA Creative Commons image source in video
Views: 1235 Audiopedia
Machine Learning Analytics Software Platform Podcast - Episode 233
 
39:46
Source: https://www.spreaker.com/user/dabcc/machine-learning-analytics-software-plat In episode 233, Douglas Brown interviews Jerry Melnick, Chief Operating Officer at SIOS Technology Corp. Jerry and Douglas discuss the new SIOS iQ machine learning analytics software platform. Jerry does a great job diving deep into SIOS iQ, what it is, how it works, why we should care and much more! Truly a must-listen episode! About SIOS iQ SIOS iQ is a machine learning analytics software platform designed to be your primary resource for IT operations information and issue resolution. SIOS iQ optimizes VMware environments to ensure business critical application environments are optimized for performance, efficiency, reliability, and capacity. Major features of the standard edition of SIOS iQ include: Performance Root Cause Analysis learns the relationships of objects and their normal patterns of behavior in a VMware infrastructure (hosts, VMs, application, network, storage, etc.); proactively identifies anomalies in behavior and the root causes of performance problems in any application; and recommends specific changes to resolve those problems. SIOS PERC Dashboard™ enables IT managers to quickly and easily ensure their VMware environment is optimized along four key quality of service dimensions: performance, efficiency, reliability and capacity (PERC). Provides mobile-application ease of use. The standard edition of SIOS iQ includes a variety of user enhancements, including the ability to expand charts to drill deeply into specific PERC areas, color-coded status indicators showing the criticality of issues - critical, warning and informational, and the inclusion of performance impact analysis showing all applications, VMs, hosts and data stores associated with a detected performance problem. Specialized Analytics for SQL Server provides advanced insight into performance issues associated with SQL Server deployments in VMware. SIOS iQ standard edition correlates interactions between SQL and infrastructure resources in the VMware environment to identify the deep root cause of performance issues. Enhanced Host Based Caching feature helps IT staff to easily determine how to improve storage performance for applications by using server side storage and host based caching (HBC). It analyzes the environment, including all blocks written to disk, and identifies the read ratio and the load profile to identify the VMs (and their disks) that will benefit most from HBC. SIOS iQ makes specific configuration recommendations such as how much cache to add and what cache block size to configure. It predicts the added performance that will be achieved if recommendations are implemented and shows the results in a single, easy-to-read chart. SIOS iQ Resource Optimization features. New standard edition of SIOS iQ provides an enhanced user interface for optimizing VMware resources by identifying and eliminating idle VMs and snapshot sprawl. SIOS iQ identifies under-used virtual machines and unnecessary snapshots and predicts the potential monthly savings that can be realized by eliminating them. Download a free version and/or trial Follow on Twitter account @SIOSTECH email: [email protected]
Views: 55 IT News
Open Data + Robust Workflow: Towards Reproducible Empirical Research on Organic Data
 
01:05:52
Heng Xu, thought leader on information sciences and big data, and Nan Zhang, thought leader on robustness and reliability in web data analytics, discuss organic data. This presentation is recorded as part of the University of Florida Warrington College of Business' Reliable Research in Business initiative. To watch more videos about reliable research practices, please sign up here: https://warrington.ufl.edu/reliable-research-in-business/best-practices-for-reliable-research/.
Views: 26 UFWarrington
ASC Science Sundays: Matthew Sullivan - Understanding Ocean Viruses
 
52:08
The Ohio State University Science Sundays series presents Matthew Sullivan - Understanding ocean viruses may just save the earth and help cure your next ailment.
UW Allen School Colloquium: Tim Althoff (Stanford University)
 
57:52
Data Science for Human Well-being Abstract: The popularity of wearable and mobile devices, including smartphones and smartwatches, has generated an explosion of detailed behavioral data. These massive digital traces provide us with an unparalleled opportunity to realize new types of scientific approaches that provide novel insights about our lives, health, and happiness. However, gaining valuable insights from these data requires new computational approaches that turn observational, scientifically "weak" data into strong scientific results and can computationally test domain theories at scale. In this talk, I will describe novel computational methods that leverage digital activity traces at the scale of billions of actions taken by millions of people. These methods combine insights from data mining, social network analysis, and natural language processing to generate actionable insights about our physical and mental well-being. Specifically, I will describe how massive digital activity traces reveal unknown health inequality around the world, and how personalized predictive models can target personalized interventions to combat this inequality. I will demonstrate that modelling how fast we are using search engines enables new types of insights into sleep and cognitive performance. Further, I will describe how natural language processing methods can help improve counseling services for millions of people in crisis. I will conclude the talk by sketching interesting future directions for computational approaches that leverage digital activity traces to better understand and improve human well-being. Bio: Tim Althoff is a Ph.D. candidate in Computer Science in the Infolab at Stanford University, advised by Jure Leskovec. His research advances computational methods to improve human well-being, combining techniques from Data Mining, Social Network Analysis, and Natural Language Processing. Prior to his PhD, Tim obtained M.S. and B.S. degrees from Stanford University and University of Kaiserslautern, Germany. He has received several fellowships and awards including the SAP Stanford Graduate Fellowship, Fulbright scholarship, German Academic Exchange Service scholarship, the German National Merit Foundation scholarship, and a Best Paper Award by the International Medical Informatics Association. Tim's research has been covered internationally by news outlets including BBC, CNN, The Economist, The Wall Street Journal, and The New York Times. April 17, 2018 This video is CC.
Unique Scientific Opportunities for the PMI National Research Cohort - April 28-29 - Day 2
 
02:58:05
NIH hosted a public workshop on the NIH campus in Bethesda, Maryland, April 28-29, 2015, to consider visionary biomedical questions that could be addressed by the proposed national research cohort of one million or more volunteer participants. The workshop will result in a series of use cases describing the distinctive science that the cohort could enable in the near term and longer term. This workshop is one of four that is being convened by the Precision Medicine Initiative Working Group of the Advisory Committee to the (NIH) Director to help inform the vision for building the PMI national participant group that they have been tasked to develop. For more information on the workshop and PMI, visit http://www.nih.gov/precisionmedicine Agenda and time codes: Welcome - Bray Patrick Lake - 00:01 Near-Term Use Cases - Dr. Kathy Hudson - 02:54 Longer-Term Use Cases - Dr. Sachin Kheterpal - 1:35:45 Recap and Next Steps - Dr. Rick Lifton - 2:51:25
Webinar: Human Skin Microflora: DNA Sequence-Based Approach to Examining Hand Disease
 
45:12
October 15, 2009. The skin creates a barrier between the body and the environment. Using animal models, Dr. Julie Segre's laboratory focuses on the genetic pathways involved in building and repairing this skin barrier. The Segre laboratory estimates that approximately one million bacteria reside on each square centimeter of skin and many common skin conditions are associated with both impaired skin barrier function and increased microbial colonization. Dr. Segre moderated the discussion, answered questions and addressed comments. In addition, the webinar discussed details of the Human Microbiome Project. More: http://www.genome.gov/27535715
LAK18 #LAK18 - Learning Analytics and Knowledge Conference, Sydney, edQuire - CEO Dr Michael Cejnar
 
18:15
From the LAK18 conference “Learning Analytics and Knowledge Conference ‘Schools Day’. Dr Michael Cejnar CEO from edQuire highlights the results from an 8 week trial of student computer learning in the classroom. The process involved using edQuire to collect, analyse and display powerful data to teachers and students (with outstanding results).
Public Hearing on U.S. Trade Deficits
 
07:38:33
The Department of Commerce and the United States Trade Representative are holding a public hearing on Thursday, May 18, at the U.S. Department of Commerce in Washington D.C., at 9:30 am. The Trump administration is analyzing the causes of America’s persistent and massive trade deficits. U.S. Secretary of Commerce Wilbur Ross is asking for input from American stakeholders on the factors that contribute to the more than $500-billion-annual goods and services trade deficit facing the United States. Read more on Trade.gov: https://blog.trade.gov/2017/05/05/provide-input-on-the-united-states-trade-deficit/ Federal Register Notice: https://www.regulations.gov/document?D=ITA-2017-0003-0001
WEBINAR: Section 188, the Nondiscrimination Provisions of WIOA, Part 4 of 4
 
01:23:13
View this webinar archive to learn about best practices in Section 188 implementation and compliance. As part of the LEAD Center’s webinar series on WIOA from a Disability Perspective, this webinar focuses on Section 188 of the Workforce Innovation and Opportunity Act (WIOA), which prohibits discrimination against people who apply to, participate in, work for, or come into contact with programs and activities of the workforce development system. Specifically, Section 188 prohibits discrimination based on race, color, religion, sex, national origin, age, disability, sexual orientation, or political affiliation or beliefs. (Section 188 of WIOA contains provisions identical to those in Section 188 of WIA.) WIOA requires that American Job Centers be fully accessible and offer necessary accommodations to provide job seekers with disabilities effective and meaningful participation in the use of skills training and career pathways for 21st century jobs. The U.S. Department of Labor issued Promising Practices in Achieving Universal Access and Equal Opportunity: A Section 188 Disability Reference Guide on July 6, 2015. This 188 Guide provides updated information and technical assistance to help American Job Centers/One-Stops meet the nondiscrimination and accessibility requirements for individuals with disabilities in Section 188 of the Workforce Investment Act and its implementing regulations. The 188 Guide, which will be discussed during the webinar, was developed to provide AJCs with promising practices that correlate with specific nondiscrimination requirements in Section 188 and includes examples of promising practices that can help promote equal access for individuals with disabilities to the American Job Center system and services.
Views: 374 LEADCtr
APS Award Address: Bringing Intelligence to Life
 
58:48
At the 2015 APS Annual Convention, APS James McKeen Cattell Fellow Ian J. Deary discussed, using the Scottish Mental Surveys, how intelligence test scores relate to aspects of people’s lives, and shared stories of participants from the studies.
Views: 1624 PsychologicalScience
The Human Microbiome: Emerging Themes at the Horizon of the 21st Century (Day 2)
 
07:32:24
The Human Microbiome: Emerging Themes at the Horizon of the 21st Century (Day 2) Air date: Thursday, August 17, 2017, 8:15:00 AM Category: Conferences Runtime: 07:32:24 Description: The 2017 NIH-wide microbiome workshop will strive to cover advances that reveal the specific ways in which the microbiota influences the physiology of the host, both in a healthy and in a diseased state and how the microbiota may be manipulated, either at the community, population, organismal or molecular level, to maintain and/or improve the health of the host. The goal will be to seek input from a trans-disciplinary group of scientists to identify 1) knowledge gaps, 2) technical hurdles, 3) new approaches and 4) research opportunities that will inform the development of novel prevention and treatment strategies based on host/microbiome interactions over the next ten years. Author: NIH Permanent link: https://videocast.nih.gov/launch.asp?23423
Views: 1447 nihvcast
DEF CON 21 - Noah Schiffman and SkyDog - The Dark Arts of OSINT
 
48:27
The Dark Arts of OSINT NOAH SCHIFFMAN SKYDOG The proliferation and availability of public information has increased with the evolution of its dissemination. With the constant creation of digital document archives and the migration towards a paperless society, vast databases of information are continuously being generated. Collectively, these publicly available databases contain enough specific information to pose certain vulnerabilities. The actionable intelligence ascertained from these data sources is known as Open Source Intelligence (OSINT). Numerous search techniques and applications exist to harvest data for OSINT purposes. Advanced operator use, social network searches, geospatial data aggregation, network traffic graphs, image specific searches, metadata extractors, and government databases provide a wealth of useful data. Furthermore, applications such as FOCA, Maltego, and SearchDiggity, in addition to custom site API integration, yield powerful search queries with organized results. Fluency in OSINT methodologies is essential for effective online reconnaissance, although a true mastery requires further mathematical investigation. The use of statistical correlation can often reveal hidden data relationships. Linkage attacks, inferential analysis, and deductive disclosure can exploit improperly sanitized data sets. These techniques can ultimately lead to data re-identification and de-anonymization, thus exposing personal information for exploitation. We will demonstrate our mathematical algorithm for data identification by attacking publicly available anonymized datasets and revealing hidden personal information. Noah Schiffman An IT industry veteran, with 20+ years of experience, Dr. Noah Schiffman is a former black-hat hacker turned security consultant. He spent almost a decade as a career computer hacker, performing penetration testing, social engineering, corporate espionage, digital surveillance, and other ethically questionable projects. Subsequently, he worked as a security consultant, teaching network defense, giving talks, and writing about information security. His past clients have consisted of Fortune 500 companies and various government agencies. For the past several years, his R&D efforts in the commercial and defense sectors have covered areas of data analysis and pattern recognition for security applications. SkyDog (@skydogcon) With 20+ years of experience in network security and computer science, Skydog possesses a unique skillset of technological diversity and depth. His accomplishments range from the design and support of enterprise level system architectures, to developing custom security products and solutions. As an industry leader in the hacker community, his expertise in vulnerability assessment and exploitation provides him with valuable insight for developing security strategies. He is responsible for establishing and running several Information Security conferences, including Outerz0ne and SkyDogCon. Working for Vanderbilt University, he spends his time researching security, performing data recovery services, and managing 100+ terabytes of storage.
Views: 1664 DEFCONConference
Psych 25: Chapter 1
 
01:30:31
Lifespan Psychology: An Introduction
Views: 3284 Devon Fell
Mobile and Personal Technologies in Precision Medicine Workshop - July 27-28, 2015 - Day 2
 
03:21:00
On July 27 and July 28, the Precision Medicine Initiative (PMI) Working Group of the Advisory Committee to the NIH Director (ACD) hosted a public workshop on the scientific, methodological and practical considerations to inform the incorporation of mobile and personal technologies in the national research cohort of one million or more volunteers. The workshop was held at the Intel Corporation campus in Santa Clara, California, and was videocast. This workshop built on the unique scientific questions developed during the April 28–29 workshop, digital health data perspectives shared during the May 28-29 workshop, and the participant engagement and health equity discussions at the July 1-2 workshop. A full list of workshops convened by the ACD PMI Working Group is available on the Events page of the NIH PMI website. Agenda and time codes: Welcome - Mr. Eric Dishman - 00:03 Technical Challenges in Using Mobile Technologies in a PMI Cohort - Mr. Eric Dishman - 00:32 Precision Medicine Initiative Cohort Potential Pilot Platforms - Dr. Sachin Kheterpal - 1:39:32 Mobile Technologies in Precision Medicine Initiative Cohort Use Cases - Ms. Sue Siegel - 2:22:58 Meeting Wrap-up - Dr. Kathy Hudson - 3:07:30