Wednesday, 31 August 2016

ggplot2 - limit Y axis in R ggplot

I have tried several things, but none of them worked. The thing is that I want to create a bar graph in R. The Y axis starts at 0, but I want it to start at, let's say, 300. I have tried ylim(300, 1200), but the bars disappear. How can I do this? Thanks a lot!



size <- c(rep("peque", 2), rep("grande", 2))
rt <- c(900, 910, 1000, 1200)
data <- data.frame(size, rt)
barra <- ggplot(data, aes(size, rt))
barra + stat_summary(fun.y = mean, geom = "bar")

php - Mysqli SELECT INSERT and UPDATE Query simulatenously

Hello, I have a database named "admin" in which I have two tables:
Table 1 name = "register"
Table 2 name = "noti"

The register table has roughly 10+ user entries, which come in through the registration page.
The noti table is empty at this time (its column is also named "noti").

I want to do the following:
First, count the total number of records in the "register" table;
then check: if the count is greater than zero, run the INSERT query, otherwise run the UPDATE query.

And I want to INSERT or UPDATE that count value in the "noti" table.



Here's my code



<?php
include('config.php');

$sql2 = "SELECT COUNT(*) AS count FROM register";
$result2 = mysqli_query($con, $sql2);

if ($result2->num_rows > 0) {
    while ($rw1 = $result2->fetch_array()) {
        $value1 = $rw1['count'];

        if (!empty($value1)) {
            mysqli_query($con, "UPDATE noti SET noti = '$value1'");
        } else {
            mysqli_query($con, "INSERT INTO noti(noti) VALUES ('$value1')");
        }
    }
}
?>

php - How to submit a post-method form to same get-url in different function in CodeIgniter?




I found that CodeIgniter form validation shows error messages with the load->view method, and the field error messages are lost if I use "redirect".



Currently I use one function to show the form page, and another function to handle the form post.




class Users extends CI_Controller {
function __construct() {
parent::__construct();
}


public function sign_up()
{
$this->load->view('users/sign_up');
}

public function do_sign_up(){
$this->form_validation->set_rules('user_login', 'User Name', 'trim|required|is_unique[users.login]');
$this->form_validation->set_rules('user_email', 'Email', 'trim|required|valid_email|is_unique[users.email]');


if ($this->form_validation->run() == FALSE) {
$this->load->view('users/sign_up');
}else {
// save post user data to users table
redirect_to("users/sign_in");
        }
    }
}

When form validation fails, the URL in the browser changes to "/users/do_sign_up"; I want to keep the same URL as the sign_up page.



Using redirect("users/sign_up") when form validation fails keeps the same URL, but the validation error messages are lost.



In CodeIgniter, I can't configure routes the way Rails does:




get "users/sign_up" => "users#signup"
post "users/sign_up" => "users#do_signup"


Answer



IMHO it's not necessary to check the request method, because if the user GETs the page you want to show the sign-up view... if the user POSTs to the page and fails validation, you ALSO want to show the sign-up view. The only case where you don't want to show the sign-up view is when the user POSTs to the page and passes validation.



imho here's the most elegant way to do it in CodeIgniter:



public function sign_up()
{
// Setup form validation
$this->form_validation->set_rules(array(
//...do stuff...

));

// Run form validation
if ($this->form_validation->run())
{
//...do stuff...
redirect('');
}

// Load view

$this->load->view('sign_up');
}


Btw, this is what I'm doing inside my config/routes.php to make my CI app more RoR-like. Remember that your routes.php is just a normal PHP file, so you can put a switch in it to generate different routes depending on the request method.



switch ($_SERVER['REQUEST_METHOD'])
{
case 'GET':
$route['users/sign_up'] = "users/signup";

break;
case 'POST':
$route['users/sign_up'] = "users/do_signup";
break;
}

c++ - Definition of variable inside loop




I was looking for an answer to my question on many pages, but I couldn't find one.



In this case we define the variable inside the loop and reinitialize it on every iteration:



while(1)
int k = 7;


In this case we define the variable before the loop and reassign it on every iteration:




int k;
while(1)
k = 7;


Are there any advantages or disadvantages to either method? Or does it make no difference at all?


Answer



The difference is in terms of scope of the variable.




In the first case, once the while loop ends, the variable k cannot be accessed.



In the second case, the variable k can be accessed out of the while loop.



In both cases, the variable is defined on the stack (or as TartanLlama points out, they could be allocated in registers) and so there is no difference in terms of performance.



However, the example you've used is flawed in that the while loop will never end. I'm guessing this is just a piece of dummy code to illustrate the situation.


java - Check if String starts with given characters regardless of upper case, lower case




So for input:




arrondissement d


I Should get output:



Arrondissement de Boulogne-sur-Mer
Arrondissement Den Bosch


So it should give back both results. In the code below I've capitalized the first character of every word, but this isn't correct because some words do not start with an upper case letter.




public ArrayList<City> getAllCitiesThatStartWithLetters(String letters) {
    ArrayList<City> filteredCities = new ArrayList<>();

    if (mCities != null) {
        for (City city : mCities) {
            if (city.getName().startsWith(capitalize(letters))) {
                filteredCities.add(city);
            }
        }
    }
    return filteredCities;
}

public String capitalize(String capString){
StringBuffer capBuffer = new StringBuffer();
Matcher capMatcher = Pattern.compile("([a-z])([a-z]*)", Pattern.CASE_INSENSITIVE).matcher(capString);
while (capMatcher.find()){
capMatcher.appendReplacement(capBuffer, capMatcher.group(1).toUpperCase() + capMatcher.group(2).toLowerCase());
}


return capMatcher.appendTail(capBuffer).toString();
}

Answer



String has a very useful regionMatches method with an ignoreCase parameter, so you can check if a region of a string matches another string case insensitively.



String alpha = "My String Has Some Capitals";
String beta = "my string";
if (alpha.regionMatches(true, 0, beta, 0, beta.length())) {

System.out.println("It matches");
}
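As a sketch, here is one way this could be applied to the city-filtering problem from the question. The class and method names below are illustrative, not the asker's actual API:

```java
import java.util.ArrayList;
import java.util.List;

public class CityFilter {
    // Case-insensitive prefix filter built on String.regionMatches.
    // No capitalization of either string is needed: regionMatches(true, ...)
    // compares the two regions ignoring case.
    static List<String> startingWith(List<String> names, String prefix) {
        List<String> out = new ArrayList<>();
        for (String name : names) {
            if (name.regionMatches(true, 0, prefix, 0, prefix.length())) {
                out.add(name);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> cities = List.of(
                "Arrondissement de Boulogne-sur-Mer",
                "Arrondissement Den Bosch",
                "Paris");
        // Both "Arrondissement ..." entries match, "Paris" does not.
        System.out.println(startingWith(cities, "arrondissement d"));
    }
}
```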

How to unzip a zip folder using php code




I want to unzip directories.



That means I have a zipped directory named xxx.zip. Manually, I right-click on the zipped directory and extract it; I then get an unzipped directory named xxx, and inside it there is another directory also named xxx, which contains the files.

That means xxx->xxx->files is the hierarchy of the unzipped folder.



So on my site I want to unzip such a directory using PHP code.



How can I do this? I need only the xxx->files structure, not xxx->xxx->files.


Answer



<?php
$zip = zip_open("zip.zip");
if ($zip) {
    while ($zip_entry = zip_read($zip)) {
        $fp = fopen("zip/" . zip_entry_name($zip_entry), "w");
        if (zip_entry_open($zip, $zip_entry, "r")) {
            $buf = zip_entry_read($zip_entry, zip_entry_filesize($zip_entry));
            fwrite($fp, $buf);
            zip_entry_close($zip_entry);
            fclose($fp);
        }
    }
    zip_close($zip);
}
?>

PHP Parse error: syntax error, unexpected T_CONSTANT_ENCAPSED_STRING, expecting ',' or ';' in C:apache2triadhtdocsimagedisplay.php on line 28

Hi, I am getting an error during the execution of my code: PHP Parse error: syntax error, unexpected T_CONSTANT_ENCAPSED_STRING, expecting ',' or ';' in C:\apache2triad\htdocs\imagedisplay.php on line 28




$dir= "C:\apache2triad\htdocs\phppgadmin\images\phpimages";

$file_display= array('jpg', 'jpeg', 'png', 'gif');


if(file_exists($dir)== false)
{
echo "directory x not found";
}
else
{
$dir_content= scandir($dir);

foreach($dir_content as $file)

{
$file_type = strtolower(end(explode('.', $file)));

// echo "$file<br>";

if($file !=='.' && $file !=='..')
{
//echo "$file<br>";
echo "', $file, '";
}

}
}
?>


please help

c++ - LNK2019: unresolved error in singleton



I need help figuring out what's wrong with this code:



class DatabaseEngine
{
protected:
DatabaseEngine();
static DatabaseEngine* m_DatabaseEngine;
public:

static DatabaseEngine& instance();
void do_something();
};


cpp:



#include "databaseengine.h"

DatabaseEngine* DatabaseEngine::m_DatabaseEngine=nullptr;


DatabaseEngine::DatabaseEngine()
{
}


static DatabaseEngine& DatabaseEngine:: instance()
{
if(m_DatabaseEngine==nullptr)
{

m_DatabaseEngine = new DatabaseEngine;
}
return *m_DatabaseEngine;
}

void DatabaseEngine::do_something()
{

}



userwindow.cpp:



#include "databaseengine.h"
UsersWindow::UsersWindow(QWidget *parent) :
QWidget(parent),
ui(new Ui::UsersWindow)
{
ui->setupUi(this);
DatabaseEngine::instance().do_something();

}

UsersWindow::~UsersWindow()
{
delete ui;
}


userswindow.obj:-1: error: LNK2019: unresolved external symbol "public: static class DatabaseEngine & __cdecl DatabaseEngine::instance(void)" (?instance@DatabaseEngine@@SAAAV1@XZ) referenced in function "public: __thiscall UsersWindow::UsersWindow(class QWidget *)" (??0UsersWindow@@QAE@PAVQWidget@@@Z)




userswindow.obj:-1: error: LNK2019: unresolved external symbol "public: void __thiscall DatabaseEngine::do_something(void)" (?do_something@DatabaseEngine@@QAEXXZ) referenced in function "public: __thiscall UsersWindow::UsersWindow(class QWidget *)" (??0UsersWindow@@QAE@PAVQWidget@@@Z)



thanks


Answer



I think you need to remove the static keyword from your static function definition:



Wrong:



static DatabaseEngine& DatabaseEngine::instance()



Correct:



DatabaseEngine& DatabaseEngine::instance()

php - how to update table row data with unique id?

code:



<?php
if(isset($_POST['save']))

{
$comment1 = $_POST['comment2'].",".date('Y-m-d');
$comment2 = $_POST['comment2'];
$id = $_POST['id'];
$query = "update enquires2 set comment1 = '$comment1', comment2 = '$comment2', s_date = '$s_datee' where id='$id'";
$result = mysqli_query($link,$query);
if($result==true)
{
echo "successfull";
}

else
{
echo "error!";
}
}
?>









$sql = "select * from enquires2 ";
$result = mysqli_query($link,$sql);
while ($row = mysqli_fetch_array($result))
{
?>










}
?>

(HTML table markup omitted: columns comment1, comment2 and an Action column with a save button per row; screenshot not shown)



In this code I want to update the enquires2 table by unique id. In the screenshot you can see that each table row has its own save button; there are multiple rows, each with a save button. I want that when I click the save button of a particular row, only that row's data is updated. How can I fix this problem? Please help.



Thank You

mysql - php syntax error, unexpected T_VARIABLE, expecting ',' or ';' on line 29

I'm trying to echo information from my database in a simple blog.
Now it just won't work, whatever I try.
I'm trying to figure it out myself, but I am stuck on a single error.




php syntax error, unexpected T_VARIABLE, expecting ',' or ';' on line 29



I just can't find a solution for it.
Hope you guys can help me; I'm going insane after being stuck on this for hours.



<?php
require('config.inc.php');
require('template.inc.php');
require('functions.inc.php');

$db_host = "***********";

$db_username = "************0";
$db_pass = "*********";
$db_name = "****************";

@mysql_connect("$db_host","$db_username","$db_pass") or die ("could not connect to mysql");
@mysql_select_db("$db_name") or die ("no database");

$title=$_POST['title'];
$contents=$_POST['contents'];
$author=$_POST['author'];

$date=$_POST['date'];
$date = strftime("%b %d, %y", strtotime($date));

$sqlcreate = mysql_query("INSERT INTO blog (date, title, contents, author)
VALUES(now(),'$title','$contents','$author')");
$query="SELECT * FROM tablename";
$result=mysql_query($query);
htmlOpenen('Voeg nieuwe post toe');
while ($result=mysql_query($query) ) {
echo'


'$result['title'];'


'$result['date'];'


'$result['contents'];'


'$result['author'];'


';
}
htmlSluiten();
mysql_close();

browser - Why does HTML think “chucknorris” is a color?



How come certain random strings produce colors when entered as background colors in HTML? For example:






 test 





...produces a document with a red background across all browsers and platforms.



Interestingly, while chucknorri produces a red background as well, chucknorr produces a yellow background.




What's going on here?


Answer



It's a holdover from the Netscape days:




Missing digits are treated as 0[...]. An incorrect digit is simply interpreted as 0. For example the values #F0F0F0, F0F0F0, F0F0F, #FxFxFx and FxFxFx are all the same.




It is from the blog post A little rant about Microsoft Internet Explorer's color parsing which covers it in great detail, including varying lengths of color values, etc.




If we apply the rules in turn from the blog post, we get the following:




  1. Replace all nonvalid hexadecimal characters with 0's



    chucknorris becomes c00c0000000

  2. Pad out to the next total number of characters divisible by 3 (11 -> 12)




    c00c 0000 0000

  3. Split into three equal groups, with each component representing the corresponding colour component of an RGB colour:



    RGB (c00c, 0000, 0000)

  4. Truncate each of the arguments from the right down to two characters




Which gives the following result:




RGB (c0, 00, 00) = #C00000 or RGB(192, 0, 0)


Here's an example demonstrating the bgcolor attribute in action, to produce this "amazing" colour swatch:

















(bgcolor swatch table omitted; it used the values "chuck norris", "Mr T", "ninjaturtle", "sick", "crap" and "grass")






This also answers the other part of the question: why does bgcolor="chucknorr" produce a yellow colour? Well, if we apply the rules, the string becomes:



c00c00000 => c00 c00 000 => c0 c0 00 [RGB(192, 192, 0)]


Which gives a light yellow-gold colour. As the string starts off with 9 characters, we keep the second c this time around, hence it ends up in the final colour value.
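For illustration, the simplified rules above can be sketched as a small JavaScript function. This is a rough approximation of the legacy behaviour described in the blog post, not the exact algorithm browsers implement:

```javascript
// Sketch of the simplified legacy colour-parsing rules described above.
function legacyColor(s) {
  // 1. Replace all non-hexadecimal characters with '0'.
  let hex = s.toLowerCase().replace(/[^0-9a-f]/g, '0');
  // 2. Pad with '0' until the length is divisible by 3.
  while (hex.length === 0 || hex.length % 3 !== 0) hex += '0';
  // 3. Split into three equal groups (R, G, B).
  const n = hex.length / 3;
  const groups = [hex.slice(0, n), hex.slice(n, 2 * n), hex.slice(2 * n)];
  // 4. Truncate each group down to its first two characters.
  return '#' + groups.map(g => g.slice(0, 2)).join('');
}

console.log(legacyColor('chucknorris')); // "#c00000"
console.log(legacyColor('chucknorr'));   // "#c0c000"
```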




I originally encountered this when someone pointed out you could do color="crap" and, well, it comes out brown.


forms - What's the proper value for a checked attribute of an HTML checkbox?



We all know how to form a checkbox input in HTML:






What I don't know -- what's the technically correct value for a checked checkbox? I've seen these all work:














Is the answer that it doesn't matter? I see no evidence for the answer marked as correct here from the spec itself:




Checkboxes (and radio buttons) are on/off switches that may be toggled
by the user. A switch is "on" when the control element's checked
attribute is set. When a form is submitted, only "on" checkbox
controls can become successful. Several checkboxes in a form may share
the same control name. Thus, for example, checkboxes allow users to
select several values for the same property. The INPUT element is used
to create a checkbox control.




What would a spec writer say is the correct answer? Please provide evidence-based answers.


Answer



Strictly speaking, you should put something that makes sense - according to the spec, the most correct version is:

<input type="checkbox" checked="checked">

For HTML, you can also use the empty attribute syntax, checked="", or even simply checked (for stricter XHTML, this is not supported).



Effectively, however, most browsers will support just about any value between the quotes. All of the following will be checked:










And only the following will be unchecked:






See also this similar question on disabled="disabled".


javascript - AngularJS : Initialize service with asynchronous data



I have an AngularJS service that I want to initialize with some asynchronous data. Something like this:



myModule.service('MyService', function($http) {
var myData = null;

$http.get('data.json').success(function (data) {

myData = data;
});

return {
setData: function (data) {
myData = data;
},
doStuff: function () {
return myData.getSomeData();
}

};
});


Obviously this won't work because if something tries to call doStuff() before myData gets back I will get a null pointer exception. As far as I can tell from reading some of the other questions asked here and here I have a few options, but none of them seem very clean (perhaps I am missing something):



Setup Service with "run"



When setting up my app do this:




myApp.run(function ($http, MyService) {
$http.get('data.json').success(function (data) {
MyService.setData(data);
});
});


Then my service would look like this:



myModule.service('MyService', function() {

var myData = null;
return {
setData: function (data) {
myData = data;
},
doStuff: function () {
return myData.getSomeData();
}
};
});



This works some of the time, but if the asynchronous data happens to take longer than everything else takes to initialize, I get a null pointer exception when I call doStuff().



Use promise objects



This would probably work. The only downside is that everywhere I call MyService I have to know that doStuff() returns a promise, and all the calling code has to use then to interact with the promise. I would rather just wait until myData is back before loading my application.



Manual Bootstrap




angular.element(document).ready(function() {
$.getJSON("data.json", function (data) {
// can't initialize the data here because the service doesn't exist yet
angular.bootstrap(document);
// too late to initialize here because something may have already
// tried to call doStuff() and would have got a null pointer exception
});
});



Global Javascript Var
I could send my JSON directly to a global Javascript variable:



HTML:






data.js:




var dataForMyService = { 
// myData here
};


Then it would be available when initializing MyService:



myModule.service('MyService', function() {
var myData = dataForMyService;
return {

doStuff: function () {
return myData.getSomeData();
}
};
});


This would work too, but then I have a global javascript variable which smells bad.



Are these my only options? Are one of these options better than the others? I know this is a pretty long question, but I wanted to show that I have tried to explore all my options. Any guidance would greatly be appreciated.



Answer



Have you had a look at $routeProvider.when('/path', { resolve: {...} })? It can make the promise approach a bit cleaner:



Expose a promise in your service:



app.service('MyService', function($http) {
var myData = null;

var promise = $http.get('data.json').success(function (data) {

myData = data;
});

return {
promise:promise,
setData: function (data) {
myData = data;
},
doStuff: function () {
return myData;//.getSomeData();

}
};
});


Add resolve to your route config:



app.config(function($routeProvider){
$routeProvider
.when('/', {controller:'MainCtrl',
template:'From MyService: {{data | json}}',
resolve:{
'MyServiceData':function(MyService){
// MyServiceData will also be injectable in your controller, if you don't want this you could create a new promise with the $q service
return MyService.promise;
}
}})
});



Your controller won't get instantiated before all dependencies are resolved:



app.controller('MainCtrl', function($scope,MyService) {
console.log('Promise is now resolved: '+MyService.doStuff().data)
$scope.data = MyService.doStuff();
});


I've made an example at plnkr: http://plnkr.co/edit/GKg21XH0RwCMEQGUdZKH?p=preview


javascript - Should I refrain from handling Promise rejection asynchronously?



I have just installed Node v7.2.0 and learned that the following code:



var prm = Promise.reject(new Error('fail'));



results in this message:



(node:4786) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: fail
(node:4786) DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.


I understand the reasoning behind this as many programmers have probably experienced the frustration of an Error ending up being swallowed by a Promise. However then I did this experiment:



var prm = Promise.reject(new Error('fail'));


setTimeout(() => {
prm.catch((err) => {
console.log(err.message);
})
},
0)


which results in:




(node:4860) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: fail
(node:4860) DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
(node:4860) PromiseRejectionHandledWarning: Promise rejection was handled asynchronously (rejection id: 1)
fail


On the basis of the PromiseRejectionHandledWarning, I assume that handling a Promise rejection asynchronously is (or might be) a bad thing.



But why is that?


Answer




"Should I refrain from handling Promise rejection asynchronously?"



Those warnings serve an important purpose, but to see how it all works, consider these examples:



Try this:



process.on('unhandledRejection', () => {});
process.on('rejectionHandled', () => {});

var prm = Promise.reject(new Error('fail'));


setTimeout(() => {
prm.catch((err) => {
console.log(err.message);
})
}, 0);


Or this:




var prm = Promise.reject(new Error('fail'));
prm.catch(() => {});

setTimeout(() => {
prm.catch((err) => {
console.log(err.message);
})
}, 0);



Or this:



var caught = require('caught');
var prm = caught(Promise.reject(new Error('fail')));

setTimeout(() => {
prm.catch((err) => {
console.log(err.message);
})
}, 0);



Disclaimer: I am the author of the caught module (and yes, I wrote it for this answer).



Rationale



It was added to Node as one of the Breaking changes between v6 and v7. There was a heated discussion about it in Issue #830: Default Unhandled Rejection Detection Behavior with no universal agreement on how promises with rejection handlers attached asynchronously should behave - work without warnings, work with warnings or be forbidden to use at all by terminating the program. More discussion took place in several issues of the unhandled-rejections-spec project.



This warning is to help you find situations where you forgot to handle the rejection but sometimes you may want to avoid it. For example you may want to make a bunch of requests and store the resulting promises in an array, only to handle it later in some other part of your program.




One of the advantages of promises over callbacks is that you can separate the place where you create the promise from the place (or places) where you attach the handlers. Those warnings make it more difficult to do but you can either handle the events (my first example) or attach a dummy catch handler wherever you create a promise that you don't want to handle right away (second example). Or you can have a module do it for you (third example).



Avoiding warnings



Attaching an empty handler doesn't change how the stored promise works if you do it in two steps:



var prm1 = Promise.reject(new Error('fail'));
prm1.catch(() => {});



This will not be the same, though:



var prm2 = Promise.reject(new Error('fail')).catch(() => {});


Here prm2 will be a different promise than prm1. While prm1 will be rejected with the 'fail' error, prm2 will be resolved with undefined, which is probably not what you want.



But you could write a simple function to make it work like the two-step example above, like I did with the caught module:



var prm3 = caught(Promise.reject(new Error('fail')));



Here prm3 is the same as prm1.



See: https://www.npmjs.com/package/caught



2017 Update



See also Pull Request #6375: lib,src: "throw" on unhandled promise rejections (not merged yet as of February 2017), which is marked for Milestone 8.0.0:





Makes Promises "throw" rejections which exit like regular uncaught errors. [emphasis added]




This means that we can expect Node 8.x to change the warning that this question is about into an error that crashes and terminates the process and we should take it into account while writing our programs today to avoid surprises in the future.



See also the Node.js 8.0.0 Tracking Issue #10117.


Tuesday, 30 August 2016

javascript - Show/hide div when checkbox is selected

I would like to show/hide a div when a single checkbox is selected. It currently works with "Select all", but I can't get it to work with a single checkbox. Here's the code for "Select all":




JS:







HTML:



Select all



I'd like to display the div "mail_delete_button" when a single checkbox is selected and hide it when nothing is checked. Note: my HTML/input fields are in the form "messageform". This is my input code:







Any help would be greatly appreciated! Thanks! :)
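Since the question's markup was lost above, here is a rough vanilla-JS sketch of one way to do it, assuming a form named "messageform" and a div with id "mail_delete_button" as described in the question:

```javascript
// Decide visibility from the checkbox states: show the button as soon
// as at least one box is checked.
function shouldShowDeleteButton(checkedStates) {
  return checkedStates.some(Boolean);
}

// DOM wiring (browser only): re-evaluate on every change inside the form.
if (typeof document !== 'undefined') {
  var form = document.forms['messageform'];
  var button = document.getElementById('mail_delete_button');
  form.addEventListener('change', function () {
    var boxes = form.querySelectorAll('input[type="checkbox"]');
    var states = Array.prototype.map.call(boxes, function (b) {
      return b.checked;
    });
    button.style.display = shouldShowDeleteButton(states) ? '' : 'none';
  });
}
```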

c - Why is do...while slower than while when using gcc optimization

I was playing around with optimization for C programs and used some code for (naively) calculating primes:



#include <stdio.h>

int is_prime(int x) {
int divisor = 2;

if(x <= 1)
return(0);
if(x == 2)

return(1);
while(divisor * divisor <= x) {
if(x % divisor == 0)
return(0);
divisor++;
}
return(1);
}

int main(void) {

for(int i = 0; i <= 10000000; i++)
if(is_prime(i))
printf("%d is a prime number.\n", i);
}


I thought I could boost the performance a little bit by using a do...while loop instead of the while loop, so that the while condition will never be executed for an even x. So I changed the is_prime() function to this:



int is_prime(int x) {
int divisor = 2;


if(x <= 1)
return(0);
if(x == 2)
return(1);
do {
if(x % divisor == 0)
return(0);
divisor++;
} while(divisor * divisor <= x);

return(1);
}


When I compile both versions without optimizations (gcc main.c -o main), time ./main > /dev/null takes about 5.5s for both, where it looks like the do...while version performs a tiny bit better (maybe ~40ms).



When I optimize (gcc main.c -O3 -o main), I get a clear difference between the two: time ./main > /dev/null gives me ~5.4s for the do...while version and ~4.9s for the while version.



How can this be explained? Is gcc just not as good at optimizing do...while loops? Should I therefore always use while loops instead?







My CPU is an Intel i5-4300M.



My gcc version is gcc (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0.



I have now tested the clang compiler (clang version 6.0.0-1ubuntu2 (tags/RELEASE_600/final); clang main.c -O3 -o main) as well. Here the results are as expected: time ./main > /dev/null takes about 6.5s for the do...while version and 6.8s for the while version.

character - Does Natsu Dragneel retain his ability to utilize some of the non-dragonfire elemental magic he eats?

In Fairy Tail, does Natsu retain his ability to utilize some of the non-dragonfire elemental magic he eats?


Natsu has eaten Etherion, Black Flames, and Lightning.


Does his consumption of other magics give him a chance of possibly calling them up for use in a tough battle?


How to Read JSON data from txt file in Java?




I now want to read a series of JSON data (node data) from a local txt file (NODES.txt, shown below). I use javax.json to do this.

Currently, I have a Node class which contains the attributes: type, id, a geometry class (containing type and coordinates), and a properties class (containing name).

Here is the data I need to retrieve. It contains more than 500 nodes; I just list 3 of them here, so I need to use a loop. I am quite new to this, please help!



The sample JSON data in NODES.txt




[
{
"type" : "Feature",
"id" : 8005583,
"geometry" : {
"type" : "Point",
"coordinates" : [
-123.2288,
48.7578
]

},
"properties" : {
"name" : 1
}
},
{
"type" : "Feature",
"id" : 8005612,
"geometry" : {
"type" : "Point",

"coordinates" : [
-123.2271,
48.7471
]
},
"properties" : {
"name" : 2
}
},
{

"type" : "Feature",
"id" : 8004171,
"geometry" : {
"type" : "Point",
"coordinates" : [
-123.266,
48.7563
]
},
"properties" : {

"name" : 3
}
},
****A Lot In the Between****
{
"type" : "Feature",
"id" : 8004172,
"geometry" : {
"type" : "Point",
"coordinates" : [

-113.226,
45.7563
]
},
"properties" : {
"name" : 526
}
}
]


Answer



Create classes to represent the entries:



Node.java:

public class Node {
    public String type;
    public String id;
    public Geometry geometry;
    public Properties properties;
}


Geometry.java:



import java.util.List;

public class Geometry {
    public String type;
    public List<Double> coordinates;
}


Properties.java:



public class Properties {
public String name;
}



And an application main class to drive the processing.



Main.java:



import com.google.gson.Gson;
import java.io.FileReader;
import java.io.Reader;

public class Main {
    public static void main(String[] args) throws Exception {
        try (Reader reader = new FileReader("NODES.txt")) {
            Gson gson = new Gson();
            Node[] nodes = gson.fromJson(reader, Node[].class);
            // work with nodes
        }
    }
}


Function JavaScript between popup.html and popup.js in chrome extension (taking into account message from Background)

I would like to execute a function in my Chrome extension (in popup.js) by clicking on a button added via innerHTML.



My code in popup.html is:
















My code in popup.js :



chrome.runtime.onMessage.addListener(function (msg, sender, sendResponse) {
    if (msg.text === 'results') {
        var panier_container = document.getElementById("panier_container");
        var texte = "";
        panier_container.innerHTML = texte;
    }
});


function toto() {
alert("toto");
}


When I execute the code, I see the "TOTO" button, but when I click on it nothing happens. Outside of chrome.runtime.onMessage.addListener(function (msg, sender, sendResponse) { the button executes the function, but inside it does not.

javascript - Attaching click event to a JQuery object not yet added to the DOM




I've been having a lot of trouble attaching the click event to a JQuery object before adding it to the DOM.




Basically I have this button that my function returns, then I append it to the DOM. What I want is to return the button with its own click handler. I don't want to select it from the DOM to attach the handler.



My code is this:



createMyButton = function(data) {

var button = $('
')
.css({
'display' : 'inline',

'padding' : '0px 2px 2px 0px',
'cursor' : 'pointer'
}).append($('').attr({
//'href' : Share.serializeJson(data),
'target' : '_blank',
'rel' : 'nofollow'
}).append($('').css({
"padding-top" : "0px",
"margin-top" : "0px",
"margin-bottom" : "0px"

})));

button.click(function () {
console.log("asdfasdf");
});

return button;
}



The button that is returned is unable to catch the click event. However, if I do this (after the button is added to the DOM):



$('#my-button').click(function () {
console.log("yeahhhh!!! but this doesn't work for me :(");
});


It works... but not for me, not what I want.



It seems to be related to the fact that the object is not yet a part of the DOM.




Oh! By the way, I'm working with OpenLayers, and the DOM object that I'm appending the button to is an OpenLayers.FramedCloud (Which is not yet a part of the DOM but will be once a couple of events are triggered.)


Answer



Use this. You can replace body with any parent element that exists on DOM ready:



$('body').on('click', '#my-button', function () {
console.log("yeahhhh!!! but this doesn't work for me :(");
});



Look here http://api.jquery.com/on/ for more info on how to use on() as it replaces live() as of 1.7+.



Below lists which version you should be using




$(selector).live(events, data, handler); // jQuery 1.3+



$(document).delegate(selector, events, data, handler); // jQuery 1.4.3+



$(document).on(events, selector, data, handler); // jQuery 1.7+




c++ - Using my custom iterator with stl algorithms



I'm trying to create my own iterator, and I've got it working as expected with the std::generate algorithm. However, when I try std::max_element or std::find, I get some cryptic errors.




Here is the interface for my iterator:



template <typename GridT,
          typename GridPtr,
          typename GridRef,
          template <typename> class ShapeT>
class GridIterator
{
public:
typedef GridIterator Iterator;


// Iterator traits - typedefs and types required to be STL compliant
typedef std::ptrdiff_t difference_type;
typedef typename GridT::Element value_type;
typedef typename GridT::Element* pointer;
typedef typename GridT::Element& reference;
typedef size_t size_type;
std::forward_iterator_tag iterator_category;



GridIterator(GridT& grid,
ShapeT shape,
Index iterStartIndex);

~GridIterator();

Iterator& operator++();
Iterator operator++(int);

typename GridT::Element& operator*();

typename GridT::Element* operator->();

bool operator!=(const GridIterator& rhs) const;
bool operator==(const GridIterator& rhs) const;
....


}



Using std::find, I get this error





In file included from /usr/include/c++/4.6/algorithm:63:0,
from ./grid/Map_Grid.h:11,
from main.cpp:4: /usr/include/c++/4.6/bits/stl_algo.h: In function ‘_IIter
std::find(_IIter, _IIter, const _Tp&) [with _IIter =
Map::GridIterator, Map::Grid*,
Map::Grid&, Map::Rectangle>, _Tp = int]’:
main.cpp:103:50: instantiated from here
/usr/include/c++/4.6/bits/stl_algo.h:4404:45: error: no matching

function for call to
‘__iterator_category(Map::GridIterator,
Map::Grid*, Map::Grid&, Map::Rectangle>&)’
/usr/include/c++/4.6/bits/stl_algo.h:4404:45: note: candidate is:
/usr/include/c++/4.6/bits/stl_iterator_base_types.h:202:5: note:
template typename std::iterator_traits::iterator_category
std::__iterator_category(const _Iter&)




With std::max_element :





In file included from /usr/include/c++/4.6/bits/char_traits.h:41:0,
from /usr/include/c++/4.6/ios:41,
from /usr/include/c++/4.6/ostream:40,
from /usr/include/c++/4.6/iostream:40,
from ./grid/Map_GridIterator.h:7,
from ./grid/Map_Grid.h:8,
from main.cpp:4: /usr/include/c++/4.6/bits/stl_algobase.h: In function ‘const _Tp&
std::max(const _Tp&, const _Tp&) [with _Tp =

Map::GridIterator, Map::Grid*,
Map::Grid&, Map::Rectangle>]’: main.cpp:102:60:
instantiated from here /usr/include/c++/4.6/bits/stl_algobase.h:215:7:
error: no match for ‘operator<’ in ‘__a < __b’
/usr/include/c++/4.6/bits/stl_algobase.h:215:7: note: candidates are:
/usr/include/c++/4.6/bits/stl_pair.h:207:5: note: template constexpr bool std::operator<(const std::pair<_T1, _T2>&,
const std::pair<_T1, _T2>&)
/usr/include/c++/4.6/bits/stl_iterator.h:291:5: note: template bool std::operator<(const std::reverse_iterator<_Iterator>&, const
std::reverse_iterator<_Iterator>&)
/usr/include/c++/4.6/bits/stl_iterator.h:341:5: note: template bool std::operator<(const std::reverse_iterator<_IteratorL>&, const
std::reverse_iterator<_IteratorR>&)

/usr/include/c++/4.6/bits/stl_iterator.h:1049:5: note: template bool std::operator<(const std::move_iterator<_IteratorL>&, const
std::move_iterator<_IteratorR>&)
/usr/include/c++/4.6/bits/stl_iterator.h:1055:5: note: template bool std::operator<(const std::move_iterator<_Iterator>&, const std::move_iterator<_Iterator>&)
/usr/include/c++/4.6/bits/basic_string.h:2510:5: note: template bool std::operator<(const std::basic_string<_CharT, _Traits, _Alloc>&, const
std::basic_string<_CharT, _Traits, _Alloc>&)
/usr/include/c++/4.6/bits/basic_string.h:2522:5: note: template bool std::operator<(const std::basic_string<_CharT, _Traits, _Alloc>&, const _CharT*)
/usr/include/c++/4.6/bits/basic_string.h:2534:5: note: template bool std::operator<(const _CharT*, const std::basic_string<_CharT, _Traits, _Alloc>&) /usr/include/c++/4.6/bits/stl_vector.h:1290:5: note: template bool std::operator<(const std::vector<_Tp, _Alloc>&, const std::vector<_Tp, _Alloc>&) /usr/include/c++/4.6/tuple:586:5: note: template bool std::operator<(const std::tuple<_TElements
...>&, const std::tuple<_Elements ...>&)



Answer




You are missing a typedef keyword for declaring an alias indicating the iterator category:



// Iterator traits - typedefs and types required to be STL compliant
//...
typedef std::forward_iterator_tag iterator_category;
~~~~~~^


Without the typedef, you are actually declaring a data member.




To avoid such mistakes, you can utilize the std::iterator class template as a base class instead of defining those aliases on your own (note that std::iterator was deprecated in C++17, so in modern code writing out the typedefs is preferred):



class GridIterator : public std::iterator<std::forward_iterator_tag,
                                          typename GridT::Element>

r - data.table vs dplyr: can one do something well the other can't or does poorly?



Overview



I'm relatively familiar with data.table, not so much with dplyr. I've read through some dplyr vignettes and examples that have popped up on SO, and so far my conclusions are that:





  1. data.table and dplyr are comparable in speed, except when there are many (i.e. >10-100K) groups, and in some other circumstances (see benchmarks below)

  2. dplyr has more accessible syntax

  3. dplyr abstracts (or will) potential DB interactions

  4. There are some minor functionality differences (see "Examples/Usage" below)



In my mind 2. doesn't bear much weight because I am fairly familiar with data.table, though I understand that for users new to both it will be a big factor. I would like to avoid an argument about which is more intuitive, as that is irrelevant for my specific question asked from the perspective of someone already familiar with data.table. I also would like to avoid a discussion about how "more intuitive" leads to faster analysis (certainly true, but again, not what I'm most interested in here).



Question




What I want to know is:




  1. Are there analytical tasks that are a lot easier to code with one or the other package for people familiar with the packages (i.e. some combination of keystrokes required vs. required level of esotericism, where less of each is a good thing).

  2. Are there analytical tasks that are performed substantially (i.e. more than 2x) more efficiently in one package vs. another.



One recent SO question got me thinking about this a bit more, because up until that point I didn't think dplyr would offer much beyond what I can already do in data.table. Here is the dplyr solution (data at end of Q):




dat %.%
group_by(name, job) %.%
filter(job != "Boss" | year == min(year)) %.%
mutate(cumu_job2 = cumsum(job2))


Which was much better than my hack attempt at a data.table solution. That said, good data.table solutions are also pretty good (thanks Jean-Robert, Arun, and note here I favored single statement over the strictly most optimal solution):



setDT(dat)[,
.SD[job != "Boss" | year == min(year)][, cumjob := cumsum(job2)],

by=list(id, job)
]


The syntax for the latter may seem very esoteric, but it actually is pretty straightforward if you're used to data.table (i.e. doesn't use some of the more esoteric tricks).



Ideally what I'd like to see is some good examples where the dplyr or data.table way is substantially more concise or performs substantially better.



Examples




Usage


  • dplyr does not allow grouped operations that return arbitrary number of rows (from eddi's question, note: this looks like it will be implemented in dplyr 0.5, also, @beginneR shows a potential work-around using do in the answer to @eddi's question).

  • data.table supports rolling joins (thanks @dholstius) as well as overlap joins

  • data.table internally optimises expressions of the form DT[col == value] or DT[col %in% values] for speed through automatic indexing which uses binary search while using the same base R syntax. See here for some more details and a tiny benchmark.

  • dplyr offers standard evaluation versions of functions (e.g. regroup, summarize_each_) that can simplify the programmatic use of dplyr (note programmatic use of data.table is definitely possible, just requires some careful thought, substitution/quoting, etc, at least to my knowledge)



Benchmarks




Data



This is for the first example I showed in the question section.



dat <- structure(list(id = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 
2L, 2L, 2L, 2L, 2L, 2L), name = c("Jane", "Jane", "Jane", "Jane",
"Jane", "Jane", "Jane", "Jane", "Bob", "Bob", "Bob", "Bob", "Bob",

"Bob", "Bob", "Bob"), year = c(1980L, 1981L, 1982L, 1983L, 1984L,
1985L, 1986L, 1987L, 1985L, 1986L, 1987L, 1988L, 1989L, 1990L,
1991L, 1992L), job = c("Manager", "Manager", "Manager", "Manager",
"Manager", "Manager", "Boss", "Boss", "Manager", "Manager", "Manager",
"Boss", "Boss", "Boss", "Boss", "Boss"), job2 = c(1L, 1L, 1L,
1L, 1L, 1L, 0L, 0L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 0L)), .Names = c("id",
"name", "year", "job", "job2"), class = "data.frame", row.names = c(NA,
-16L))

Answer




We need to cover at least these aspects to provide a comprehensive answer/comparison (in no particular order of importance): Speed, Memory usage, Syntax and Features.



My intent is to cover each one of these as clearly as possible from data.table perspective.




Note: unless explicitly mentioned otherwise, by referring to dplyr, we refer to dplyr's data.frame interface whose internals are in C++ using Rcpp.








The data.table syntax is consistent in its form - DT[i, j, by]. Keeping i, j and by together is by design. By keeping related operations together, it allows us to easily optimise operations for speed and, more importantly, memory usage, and also provides some powerful features, all while maintaining consistency in syntax.



1. Speed



Quite a few benchmarks (though mostly on grouping operations) have been added to the question already showing data.table gets faster than dplyr as the number of groups and/or rows to group by increase, including benchmarks by Matt on grouping from 10 million to 2 billion rows (100GB in RAM) on 100 - 10 million groups and varying grouping columns, which also compares pandas. See also updated benchmarks, which include Spark and pydatatable as well.



On benchmarks, it would be great to cover these remaining aspects as well:




  • Grouping operations involving a subset of rows - i.e., DT[x > val, sum(y), by = z] type operations.



  • Benchmark other operations such as update and joins.


  • Also benchmark memory footprint for each operation in addition to runtime.




2. Memory usage




  1. Operations involving filter() or slice() in dplyr can be memory inefficient (on both data.frames and data.tables). See this post.





    Note that Hadley's comment talks about speed (that dplyr is plenty fast for him), whereas the major concern here is memory.



  2. data.table's interface at the moment allows one to modify/update columns by reference (note that we don't need to re-assign the result back to a variable).



    # sub-assign by reference, updates 'y' in-place
    DT[x >= 1L, y := NA]


    But dplyr will never update by reference. The dplyr equivalent would be (note that the result needs to be re-assigned):




    # copies the entire 'y' column
    ans <- DF %>% mutate(y = replace(y, which(x >= 1L), NA))


    A concern for this is referential transparency. Updating a data.table object by reference, especially within a function may not be always desirable. But this is an incredibly useful feature: see this and this posts for interesting cases. And we want to keep it.



    Therefore we are working towards exporting shallow() function in data.table that will provide the user with both possibilities. For example, if it is desirable to not modify the input data.table within a function, one can then do:



    foo <- function(DT) {
    DT = shallow(DT) ## shallow copy DT

    DT[, newcol := 1L] ## does not affect the original DT
    DT[x > 2L, newcol := 2L] ## no need to copy (internally), as this column exists only in shallow copied DT
    DT[x > 2L, x := 3L] ## have to copy (like base R / dplyr does always); otherwise original DT will
    ## also get modified.
    }


    By not using shallow(), the old functionality is retained:



    bar <- function(DT) {

    DT[, newcol := 1L] ## old behaviour, original DT gets updated by reference
    DT[x > 2L, x := 3L] ## old behaviour, update column x in original DT.
    }


    By creating a shallow copy using shallow(), we understand that you don't want to modify the original object. We take care of everything internally to ensure that, while also ensuring to copy the columns you modify only when it is absolutely necessary. When implemented, this should settle the referential transparency issue altogether while providing the user with both possibilities.




    Also, once shallow() is exported dplyr's data.table interface should avoid almost all copies. So those who prefer dplyr's syntax can use it with data.tables.




    But it will still lack many features that data.table provides, including (sub)-assignment by reference.



  3. Aggregate while joining:



    Suppose you have two data.tables as follows:



    DT1 = data.table(x=c(1,1,1,1,2,2,2,2), y=c("a", "a", "b", "b"), z=1:8, key=c("x", "y"))
    # x y z
    # 1: 1 a 1
    # 2: 1 a 2

    # 3: 1 b 3
    # 4: 1 b 4
    # 5: 2 a 5
    # 6: 2 a 6
    # 7: 2 b 7
    # 8: 2 b 8
    DT2 = data.table(x=1:2, y=c("a", "b"), mul=4:3, key=c("x", "y"))
    # x y mul
    # 1: 1 a 4
    # 2: 2 b 3



    And you would like to get sum(z) * mul for each row in DT2 while joining by columns x,y. We can either:




    • 1) aggregate DT1 to get sum(z), 2) perform a join and 3) multiply (or)



      # data.table way
      DT1[, .(z = sum(z)), keyby = .(x,y)][DT2][, z := z*mul][]


      # dplyr equivalent
      DF1 %>% group_by(x, y) %>% summarise(z = sum(z)) %>%
      right_join(DF2) %>% mutate(z = z * mul)

    • 2) do it all in one go (using by = .EACHI feature):



      DT1[DT2, list(z=sum(z) * mul), by = .EACHI]




    What is the advantage?




    • We don't have to allocate memory for the intermediate result.


    • We don't have to group/hash twice (one for aggregation and other for joining).


    • And more importantly, the operation we wanted to perform is clear from looking at j in (2).




    Check this post for a detailed explanation of by = .EACHI. No intermediate results are materialised, and the join+aggregate is performed all in one go.




    Have a look at this, this and this posts for real usage scenarios.



    In dplyr you would have to join and aggregate or aggregate first and then join, neither of which are as efficient, in terms of memory (which in turn translates to speed).


  4. Update and joins:



    Consider the data.table code shown below:



    DT1[DT2, col := i.mul]



    adds/updates DT1's column col with mul from DT2 on those rows where DT2's key column matches DT1. I don't think there is an exact equivalent of this operation in dplyr without resorting to a *_join operation, which would have to copy the entire DT1 just to add a new column to it, which is unnecessary.



    Check this post for a real usage scenario.





To summarise, it is important to realise that every bit of optimisation matters. As Grace Hopper would say, Mind your nanoseconds!




3. Syntax




Let's now look at syntax. Hadley commented here:




Data tables are extremely fast but I think their concision makes it harder to learn and code that uses it is harder to read after you have written it ...




I find this remark pointless because it is very subjective. What we can perhaps try is to contrast consistency in syntax. We will compare data.table and dplyr syntax side-by-side.



We will work with the dummy data shown below:




DT = data.table(x=1:10, y=11:20, z=rep(1:2, each=5))
DF = as.data.frame(DT)



  1. Basic aggregation/update operations.



    # case (a)
    DT[, sum(y), by = z] ## data.table syntax

    DF %>% group_by(z) %>% summarise(sum(y)) ## dplyr syntax
    DT[, y := cumsum(y), by = z]
    ans <- DF %>% group_by(z) %>% mutate(y = cumsum(y))

    # case (b)
    DT[x > 2, sum(y), by = z]
    DF %>% filter(x>2) %>% group_by(z) %>% summarise(sum(y))
    DT[x > 2, y := cumsum(y), by = z]
    ans <- DF %>% group_by(z) %>% mutate(y = replace(y, which(x > 2), cumsum(y)))


    # case (c)
    DT[, if(any(x > 5L)) y[1L]-y[2L] else y[2L], by = z]
    DF %>% group_by(z) %>% summarise(if (any(x > 5L)) y[1L] - y[2L] else y[2L])
    DT[, if(any(x > 5L)) y[1L] - y[2L], by = z]
    DF %>% group_by(z) %>% filter(any(x > 5L)) %>% summarise(y[1L] - y[2L])



    • data.table syntax is compact and dplyr's quite verbose. Things are more or less equivalent in case (a).


    • In case (b), we had to use filter() in dplyr while summarising. But while updating, we had to move the logic inside mutate(). In data.table however, we express both operations with the same logic - operate on rows where x > 2, but in first case, get sum(y), whereas in the second case update those rows for y with its cumulative sum.




      This is what we mean when we say the DT[i, j, by] form is consistent.


    • Similarly in case (c), when we have if-else condition, we are able to express the logic "as-is" in both data.table and dplyr. However, if we would like to return just those rows where the if condition satisfies and skip otherwise, we cannot use summarise() directly (AFAICT). We have to filter() first and then summarise because summarise() always expects a single value.



      While it returns the same result, using filter() here makes the actual operation less obvious.



      It might very well be possible to use filter() in the first case as well (does not seem obvious to me), but my point is that we should not have to.



  2. Aggregation / update on multiple columns




    # case (a)
    DT[, lapply(.SD, sum), by = z] ## data.table syntax
    DF %>% group_by(z) %>% summarise_each(funs(sum)) ## dplyr syntax
    DT[, (cols) := lapply(.SD, sum), by = z]
    ans <- DF %>% group_by(z) %>% mutate_each(funs(sum))

    # case (b)
    DT[, c(lapply(.SD, sum), lapply(.SD, mean)), by = z]
    DF %>% group_by(z) %>% summarise_each(funs(sum, mean))


    # case (c)
    DT[, c(.N, lapply(.SD, sum)), by = z]
    DF %>% group_by(z) %>% summarise_each(funs(n(), mean))



    • In case (a), the codes are more or less equivalent. data.table uses familiar base function lapply(), whereas dplyr introduces *_each() along with a bunch of functions to funs().


    • data.table's := requires column names to be provided, whereas dplyr generates them automatically.


    • In case (b), dplyr's syntax is relatively straightforward. Improving aggregations/updates on multiple functions is on data.table's list.


    • In case (c) though, dplyr would return n() as many times as there are columns, instead of just once. In data.table, all we need to do is to return a list in j. Each element of the list will become a column in the result. So we can use, once again, the familiar base function c() to concatenate .N to a list, which again returns a list.






    Note: Once again, in data.table, all we need to do is return a list in j. Each element of the list will become a column in result. You can use c(), as.list(), lapply(), list() etc... base functions to accomplish this, without having to learn any new functions.



    You will need to learn just the special variables - .N and .SD at least. The equivalents in dplyr are n() and .



  3. Joins



    dplyr provides separate functions for each type of join, whereas data.table allows joins using the same syntax DT[i, j, by] (and with reason). It also provides an equivalent merge.data.table() function as an alternative.




    setkey(DT1, x, y)

    # 1. normal join
    DT1[DT2] ## data.table syntax
    left_join(DT2, DT1) ## dplyr syntax

    # 2. select columns while join
    DT1[DT2, .(z, i.mul)]
    left_join(select(DT2, x, y, mul), select(DT1, x, y, z))


    # 3. aggregate while join
    DT1[DT2, .(sum(z) * i.mul), by = .EACHI]
    DF1 %>% group_by(x, y) %>% summarise(z = sum(z)) %>%
    inner_join(DF2) %>% mutate(z = z*mul) %>% select(-mul)

    # 4. update while join
    DT1[DT2, z := cumsum(z) * i.mul, by = .EACHI]
    ??


    # 5. rolling join
    DT1[DT2, roll = -Inf]
    ??

    # 6. other arguments to control output
    DT1[DT2, mult = "first"]
    ??




    • Some might find a separate function for each join much nicer (left, right, inner, anti, semi, etc.), whereas others might like data.table's DT[i, j, by], or merge() which is similar to base R.


    • However dplyr joins do just that. Nothing more. Nothing less.


    • data.tables can select columns while joining (2); in dplyr you need to select() on both data.frames before joining, as shown above. Otherwise you would materialise the join with unnecessary columns only to remove them later, and that is inefficient.


    • data.tables can aggregate while joining (3) and also update while joining (4), using the by = .EACHI feature. Why materialise the entire join result just to add/update a few columns?


    • data.table is capable of rolling joins (5) - roll forward, LOCF, roll backward, NOCB, nearest.


    • data.table also has mult = argument which selects first, last or all matches (6).


    • data.table has allow.cartesian = TRUE argument to protect from accidental invalid joins.







Once again, the syntax is consistent with DT[i, j, by] with additional arguments allowing for controlling the output further.





  1. do()...



    dplyr's summarise() is specially designed for functions that return a single value. If your function returns multiple/unequal values, you will have to resort to do(). You have to know beforehand what all your functions return.



    DT[, list(x[1], y[1]), by = z]                 ## data.table syntax

    DF %>% group_by(z) %>% summarise(x[1], y[1]) ## dplyr syntax
    DT[, list(x[1:2], y[1]), by = z]
    DF %>% group_by(z) %>% do(data.frame(.$x[1:2], .$y[1]))

    DT[, quantile(x, 0.25), by = z]
    DF %>% group_by(z) %>% summarise(quantile(x, 0.25))
    DT[, quantile(x, c(0.25, 0.75)), by = z]
    DF %>% group_by(z) %>% do(data.frame(quantile(.$x, c(0.25, 0.75))))

    DT[, as.list(summary(x)), by = z]

    DF %>% group_by(z) %>% do(data.frame(as.list(summary(.$x))))



    • .SD's equivalent is .


    • In data.table, you can throw pretty much anything in j - the only thing to remember is for it to return a list so that each element of the list gets converted to a column.


    • In dplyr, you cannot do that. You have to resort to do() depending on how sure you are of whether your function will always return a single value. And it is quite slow.







Once again, data.table's syntax is consistent with DT[i, j, by]. We can just keep throwing expressions in j without having to worry about these things.




Have a look at this SO question and this one. I wonder if it would be possible to express the answer as straightforward using dplyr's syntax...




To summarise, I have particularly highlighted several instances where dplyr's syntax is either inefficient, limited or fails to make operations straightforward. This is particularly because data.table gets quite a bit of backlash about "harder to read/learn" syntax (like the one pasted/linked above). Most posts that cover dplyr talk about most straightforward operations. And that is great. But it is important to realise its syntax and feature limitations as well, and I am yet to see a post on it.



data.table has its quirks as well (some of which I have pointed out that we are attempting to fix). We are also attempting to improve data.table's joins as I have highlighted here.




But one should also consider the number of features that dplyr lacks in comparison to data.table.




4. Features



I have pointed out most of the features here and also in this post. In addition:




  • fread - fast file reader has been available for a long time now.



  • fwrite - a parallelised fast file writer is now available. See this post for a detailed explanation on the implementation and #1664 for keeping track of further developments.


  • Automatic indexing - another handy feature to optimise base R syntax as is, internally.


  • Ad-hoc grouping: dplyr automatically sorts the results by grouping variables during summarise(), which may not always be desirable.


  • Numerous advantages in data.table joins (for speed / memory efficiency and syntax) mentioned above.


  • Non-equi joins: Allows joins using other operators <=, <, >, >= along with all other advantages of data.table joins.


  • Overlapping range joins was implemented in data.table recently. Check this post for an overview with benchmarks.


  • setorder() function in data.table that allows really fast reordering of data.tables by reference.


  • dplyr provides an interface to databases using the same syntax, which data.table does not at the moment.


  • data.table provides faster equivalents of set operations (written by Jan Gorecki) - fsetdiff, fintersect, funion and fsetequal with additional all argument (as in SQL).


  • data.table loads cleanly with no masking warnings and has a mechanism described here for [.data.frame compatibility when passed to any R package. dplyr changes base functions filter, lag and [ which can cause problems; e.g. here and here.








Finally:




  • On databases - there is no reason why data.table cannot provide a similar interface, but this is not a priority now. It might get bumped up if users would very much like that feature... not sure.


  • On parallelism - Everything is difficult, until someone goes ahead and does it. Of course it will take effort (being thread safe).





    • Progress is being made currently (in v1.9.7 devel) towards parallelising known time consuming parts for incremental performance gains using OpenMP.



PHP variable class static method call



I have a property that stores a class name as a string. I then want to use this to call a static method of said class. As far as I know, this is possible since PHP 5.3. I am running 5.6.x on a vagrant box.



I want to do this:



$item = $this->className::getItem($id);



But I get the following error:



Parse error: syntax error, unexpected '::' (T_PAAMAYIM_NEKUDOTAYIM)...


The following works fine:



$c = $this->className;
$item = $c::getItem($id);



Any idea why? Is this not the same thing?


Answer



The problem is that in the first attempt you are applying :: directly to a property access on $this, whereas in the second you first copy the property's value (a class name as a string) into $c; a plain variable holding a class name can be used for static calls to static class methods.



class a {
static function b(){echo'works';}
}
$a='a';
$a::b();



But the real cause of the error is that $this->className:: is a syntax error in PHP 5; PHP 7.0's uniform variable syntax made $this->className::getItem($id) legal.


Interpolation (double quoted string) of Associative Arrays in PHP



When interpolating PHP's string-indexed array elements (5.3.3, Win32)
the following behavior may be expected or not:



$ha = array('key1' => 'Hello to me');

print $ha['key1']; # correct (usual way)
print $ha[key1]; # Warning, works (use of undefined constant)


print "He said {$ha['key1']}"; # correct (usual way)
print "He said {$ha[key1]}"; # Warning, works (use of undefined constant)

print "He said $ha['key1']"; # Error, unexpected T_ENCAPSED_AND_WHITESPACE
print "He said $ha[ key1 ]"; # Error, unexpected T_ENCAPSED_AND_WHITESPACE
print "He said $ha[key1]"; # !! correct (How Comes?)


Interestingly, the last line seems to be correct PHP code. Any explanations?
Can this feature be trusted?






Edit: The point of the posting now set in bold face in order to reduce misunderstandings.

Answer



Yes, you may trust it. All ways of interpolating a variable are covered pretty well in the documentation.



If you want to have a reason why this was done so, well, I can't help you there. But as always: PHP is old and has evolved a lot, thus introducing inconsistent syntax.


timestamp - Python copy file but keep original




Python query.



I want to take a copy of a file, called randomfile.dat, and add a timestamp to the end of the copied file.



However, I want to keep the original file too. So in my current directory (no moving files) I would end up with:
randomfile.dat
randomfile.dat.201711241923 (or whatever the timestamp format is..)



Can someone advise? Anything I have tried causes me to lose the original file.



Answer



How about this?



$ ls

$ touch randomfile.dat

$ ls
randomfile.dat


$ python
[...]
>>> import time
>>> src_filename = 'randomfile.dat'
>>> dst_filename = src_filename + time.strftime('.%Y%m%d%H%M')

>>> import shutil
>>> shutil.copy(src_filename, dst_filename)
'randomfile.dat.201711241929'
>>> [Ctrl+D]


$ ls
randomfile.dat
randomfile.dat.201711241929

Monday, 29 August 2016

Namespace without a name in C++











I came across this code



namespace ABC {
namespace DEF {


namespace
{


I expected the namespace should be followed by some name, but it's not the case with this code.



Is this allowed in C++? What's the advantage for this unnamed namespace?


Answer



It's called an unnamed namespace / anonymous namespace. Its use is to make functions/objects/etc. accessible only within that file. It's almost the same as static in C.



linux - Export a table as csv in mysql from shell script

I am trying to export a result set into a csv file and load it to mysql.




mysql -e "select *  from temp" > '/usr/apps/{path}/some.csv'


The output file is not importable. It has the query, headers and a bunch of unwanted lines. All I want is just the COMMA-delimited VALUES in the file, so that I can import it back.



What did I try so far?




  1. Added | sed 's/\t/,/g' - Did not help

  2. Tried OUTFILE but it did not work.


  3. Tried SHOW VARIABLES LIKE "secure_file_priv" which gave null.



OUTFILE will not work for me because I get the error "The MySQL server is running with the --secure-file-priv option so it cannot execute this statement". I cannot edit the variable secure-file-priv. And it has a null value right now.



The file output I get is shown in the screenshot (not preserved here). I used the alias mysql2csv='sed '\''s/\t/","/g;s/^/"/;s/$/"/;s/\n//g'\'''

How to convert a Java 8 Stream to an Array?



What is the easiest/shortest way to convert a Java 8 Stream into an array?


Answer



The easiest method is to use the toArray(IntFunction<A[]> generator) overload with an array constructor reference. This is suggested in the API documentation for the method.



String[] stringArray = stringStream.toArray(String[]::new);


What it does is find a method that takes in an integer (the size) as argument, and returns a String[], which is exactly what (one of the overloads of) new String[] does.



You could also write your own IntFunction:



Stream<String> stringStream = ...;
String[] stringArray = stringStream.toArray(size -> new String[size]);


The purpose of the IntFunction generator is to convert an integer, the size of the array, to a new array.



Example code:



Stream<String> stringStream = Stream.of("a", "b", "c");
String[] stringArray = stringStream.toArray(size -> new String[size]);
Arrays.stream(stringArray).forEach(System.out::println);


Prints:



a
b
c

preferences - How do I get the SharedPreferences from a PreferenceActivity in Android?




I am using a PreferenceActivity to show some settings for my application. I am inflating the settings via a xml file so that my onCreate (and complete class methods) looks like this:



public class FooActivity extends PreferenceActivity {
@Override
public void onCreate(Bundle icicle) {
super.onCreate(icicle);
addPreferencesFromResource(R.xml.preference);
}
}



The javadoc of PreferenceActivity (and PreferenceFragment) states that




These preferences will automatically save to SharedPreferences as the user interacts with them. To retrieve an instance of SharedPreferences that the preference hierarchy in this activity will use, call getDefaultSharedPreferences(android.content.Context) with a context in the same package as this activity.




But how I get the name of the SharedPreference in another Activity? I can only call



getSharedPreferences(name, mode)



in the other activity, but I need the name of the SharedPreferences file which the PreferenceActivity used. What is its name, or how can I retrieve it?


Answer



import android.preference.PreferenceManager;
SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(this);
// then you use
prefs.getBoolean("keystring", true);



Update



According to Shared Preferences | Android Developer Tutorial (Part 13) by Sai Geetha M N,




Many applications may provide a way to capture user preferences on the
settings of a specific application or an activity. For supporting
this, Android provides a simple set of APIs.



Preferences are typically name-value pairs. They can be stored as

“Shared Preferences” across various activities in an application (note
that currently they cannot be shared across processes), or they can be
something that needs to be stored specific to an activity.





  1. Shared Preferences: The shared preferences can be used by all the components (activities, services etc) of the applications.


  2. Activity handled preferences: These preferences can only be used within the particular activity and can not be used by other components of the application.





Shared Preferences:



The shared preferences are managed with the help of getSharedPreferences method of the Context class. The preferences are stored in a default file (1) or you can specify a file name (2) to be used to refer to the preferences.



(1) The recommended way is to use the default mode, without specifying the file name:



SharedPreferences preferences = PreferenceManager.getDefaultSharedPreferences(context);


(2) Here is how you get the instance when you specify the file name




public static final String PREF_FILE_NAME = "PrefFile";
SharedPreferences preferences = getSharedPreferences(PREF_FILE_NAME, MODE_PRIVATE);


MODE_PRIVATE is the operating mode for the preferences. It is the default mode and means the created file will be accessible only to the calling application. The other two supported modes are MODE_WORLD_READABLE and MODE_WORLD_WRITEABLE. In MODE_WORLD_READABLE other applications can read the created file but cannot modify it. In MODE_WORLD_WRITEABLE other applications also have write permission for the created file.



Finally, once you have the preferences instance, here is how you can retrieve the stored values from the preferences:



int storedPreference = preferences.getInt("storedInt", 0);



To store values in the preference file, a SharedPreferences.Editor object has to be used. Editor is a nested interface of the SharedPreferences class.



SharedPreferences.Editor editor = preferences.edit();
editor.putInt("storedInt", storedPreference); // value to store
editor.commit();


Editor also supports methods like remove() and clear() to delete the preference values from the file.




Activity Preferences:



The shared preferences can be used by other application components. But if you do not need to share the preferences with other components and want activity-private preferences, you can do that with the help of the getPreferences() method of the activity. The getPreferences() method calls getSharedPreferences() with the name of the activity class as the preference file name.



Following is the code to get preferences



SharedPreferences preferences = getPreferences(MODE_PRIVATE);
int storedPreference = preferences.getInt("storedInt", 0);



The code to store values is also the same as in case of shared preferences.



SharedPreferences preferences = getPreferences(MODE_PRIVATE);
SharedPreferences.Editor editor = preferences.edit();
editor.putInt("storedInt", storedPreference); // value to store
editor.commit();


You can also use other mechanisms, such as storing the activity state in a database. Note that Android also contains a package called android.preference, which defines classes to implement the application preferences UI.




To see some more examples check Android's Data Storage post on developers site.


python - What does if __name__ == "__main__": do?




What does if __name__ == "__main__": do?



# Threading example
import time, thread

def myfunction(string, sleeptime, lock, *args):
    while True:
        lock.acquire()
        time.sleep(sleeptime)
        lock.release()
        time.sleep(sleeptime)

if __name__ == "__main__":
    lock = thread.allocate_lock()
    thread.start_new_thread(myfunction, ("Thread #: 1", 2, lock))
    thread.start_new_thread(myfunction, ("Thread #: 2", 2, lock))

Answer



Whenever the Python interpreter reads a source file, it does two things:





  • it sets a few special variables like __name__, and then


  • it executes all of the code found in the file.




Let's see how this works and how it relates to your question about the __name__ checks we always see in Python scripts.





Let's use a slightly different code sample to explore how imports and scripts work. Suppose the following is in a file called foo.py.




# Suppose this is foo.py.

print("before import")
import math

print("before functionA")
def functionA():
    print("Function A")

print("before functionB")
def functionB():
    print("Function B {}".format(math.sqrt(100)))

print("before __name__ guard")
if __name__ == '__main__':
    functionA()
    functionB()
print("after __name__ guard")





When the Python interpreter reads a source file, it first defines a few special variables. In this case, we care about the __name__ variable.



When Your Module Is the Main Program



If you are running your module (the source file) as the main program, e.g.



python foo.py



the interpreter will assign the hard-coded string "__main__" to the __name__ variable, i.e.



# It's as if the interpreter inserts this at the top
# of your module when run as the main program.
__name__ = "__main__"


When Your Module Is Imported By Another




On the other hand, suppose some other module is the main program and it imports your module. This means there's a statement like this in the main program, or in some other module the main program imports:



# Suppose this is in some other main program.
import foo


In this case, the interpreter will look at the filename of your module, foo.py, strip off the .py, and assign that string to your module's __name__ variable, i.e.



# It's as if the interpreter inserts this at the top

# of your module when it's imported from another module.
__name__ = "foo"




After the special variables are set up, the interpreter executes all the code in the module, one statement at a time. You may want to open another window on the side with the code sample so you can follow along with this explanation.



Always





  1. It prints the string "before import" (without quotes).


  2. It loads the math module and assigns it to a variable called math. This is equivalent to replacing import math with the following (note that __import__ is a low-level function in Python that takes a string and triggers the actual import):




# Find and load a module given its string name, "math",
# then assign it to a local variable called math.
math = __import__("math")




  3. It prints the string "before functionA".


  4. It executes the def block, creating a function object, then assigning that function object to a variable called functionA.


  5. It prints the string "before functionB".


  6. It executes the second def block, creating another function object, then assigning it to a variable called functionB.


  7. It prints the string "before __name__ guard".
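Step 2's claim that import math behaves roughly like math = __import__("math") is easy to verify directly; a minimal sketch:

```python
# __import__ takes a module name as a string and returns the module object;
# binding the result to a name is what the import statement does under the hood.
math = __import__("math")

print(math.sqrt(100))  # the very same sqrt that a normal "import math" would give
```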




Only When Your Module Is the Main Program





  1. If your module is the main program, then it will see that __name__ was indeed set to "__main__" and it calls the two functions, printing the strings "Function A" and "Function B 10.0".



Only When Your Module Is Imported by Another




  1. (instead) If your module is not the main program but was imported by another one, then __name__ will be "foo", not "__main__", and it'll skip the body of the if statement.




Always




  1. It will print the string "after __name__ guard" in both situations.



Summary



In summary, here's what'd be printed in the two cases:




# What gets printed if foo is the main program
before import
before functionA
before functionB
before __name__ guard
Function A
Function B 10.0
after __name__ guard



# What gets printed if foo is imported as a regular module
before import
before functionA
before functionB
before __name__ guard
after __name__ guard





You might naturally wonder why anybody would want this. Well, sometimes you want to write a .py file that can both be used as a module by other programs and also be run as the main program itself. Examples:




  • Your module is a library, but you want to have a script mode where it runs some unit tests or a demo.


  • Your module is only used as a main program, but it has some unit tests, and the testing framework works by importing .py files like your script and running special test functions. You don't want it to try running the script just because it's importing the module.


  • Your module is mostly used as a main program, but it also provides a programmer-friendly API for advanced users.




Beyond those examples, it's elegant that running a script in Python is just setting up a few magic variables and importing the script. "Running" the script is a side effect of importing the script's module.
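That last point can be made concrete with the standard library's runpy module, which exposes the "run a file as a script" machinery directly. A self-contained sketch (the module name demo_mod is made up for this example):

```python
import os
import runpy
import sys
import tempfile

# Write a tiny module with a __name__ guard into a temporary directory.
source = (
    "print('top level, __name__ =', __name__)\n"
    "if __name__ == '__main__':\n"
    "    print('guard ran')\n"
)
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "demo_mod.py")
with open(path, "w") as f:
    f.write(source)

# Import it as a module: __name__ is "demo_mod", so the guard is skipped.
sys.path.insert(0, tmpdir)
import demo_mod  # prints: top level, __name__ = demo_mod

# "Run" the same file as a script: runpy sets __name__ to "__main__",
# so this time the guard fires.
runpy.run_path(path, run_name="__main__")
# prints: top level, __name__ = __main__
# prints: guard ran
```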







  • Question: Can I have multiple __name__ checking blocks? Answer: it's strange to do so, but the language won't stop you.


  • Suppose the following is in foo2.py. What happens if you say python foo2.py on the command-line? Why?




# Suppose this is foo2.py.

def functionA():
    print("a1")
    from foo2 import functionB
    print("a2")
    functionB()
    print("a3")

def functionB():
    print("b")

print("t1")

if __name__ == "__main__":
    print("m1")
    functionA()
    print("m2")
print("t2")



  • Now, figure out what will happen if you remove the __name__ check in foo3.py:




# Suppose this is foo3.py.

def functionA():
    print("a1")
    from foo3 import functionB
    print("a2")
    functionB()
    print("a3")

def functionB():
    print("b")

print("t1")
print("m1")
functionA()
print("m2")
print("t2")




  • What will this do when used as a script? When imported as a module?



# Suppose this is in foo4.py
__name__ = "__main__"

def bar():
    print("bar")

print("before __name__ guard")
if __name__ == "__main__":
    bar()
print("after __name__ guard")

c++ - Does curly brackets matter for empty constructor?

Those brackets declare an empty, inline constructor. In that case, with them, the constructor does exist, it merely does nothing more than t...